DoS limits #14
After running some numbers, I redefined transaction "cost". What's specified in this PR allows a max extension block size of 6mb and an average case ext. block size limit of 2mb. I think this is fairly reasonable, but I'd like feedback. We could always start lower (1mb) and do a soft-fork upgrade to 2mb later.
As for code, the dev branch has also been updated to reflect these changes.
A costly mistake was made in Bitcoin before: encoding a fixed block size limit into the protocol during a time of crisis.
In the interest of avoiding further mistakes, let's discuss this.
My original calculations were off a bit. Here is the data for the last 1000 ~1mb mainchain blocks when applying the "cost" algorithm to them: https://gist.github.com/chjj/af70a21b539746efbb5a6f724a3715af
This treats regular P2PKH as P2WPKH and P2SH as P2WSH. It also calculates each input script's size as if it were a witness vector.
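To make that input-script treatment concrete, here is a sketch of how a legacy input might be counted (the field names and sizing logic are my own assumptions, not code from the gist):

```js
// Sketch: bill a legacy input's scriptSig as witness data, so the
// signature and pubkey get the cheap (1x) weighting a P2WPKH spend
// would receive, while the fixed input fields stay at the 4x base rate.
function estimateInputCost(input) {
  const baseSize = 32 + 4  // previous outpoint (txid + index)
                 + 1       // varint for a now-empty scriptSig
                 + 4;      // sequence
  const witnessSize = input.scriptSig.length; // counted as witness bytes
  return baseSize * 4 + witnessSize;
}
```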
If we simply want a straight capacity increase, this means that an average case block could be limited to roughly 2mb with something like:
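(A sketch only: the constant name, the exact ceiling, and the helper fields below are my assumptions; the real number depends on the weighting chosen.)

```js
// Illustrative ceiling: with base bytes counted 4x and witness bytes 1x,
// historical blocks are witness-light, so cost ~= 4 * size. A limit of
// 8,000,000 therefore lands the average case near 2mb while letting
// witness-heavy blocks grow considerably larger.
const MAX_EXTENSION_COST = 8000000;

// BIP141-style cost: baseSize * 3 + totalSize
// (equivalent to baseSize * 4 + witnessSize).
function getTXCost(baseSize, totalSize) {
  return baseSize * 3 + totalSize;
}

function checkExtensionCost(txs) {
  let cost = 0;
  for (const tx of txs)
    cost += getTXCost(tx.baseSize, tx.totalSize);
  return cost <= MAX_EXTENSION_COST;
}
```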
Whether to treat the output cost more harshly is another question that I'm sure will come up (and has). I don't have strong personal feelings either way. As far as technicals go, the idea of keeping pressure off leveldb can be appealing. Going that route would simply mean weighting output bytes more heavily in the cost calculation.
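As a hypothetical illustration of what that might look like (the multiplier and the `tx` fields are assumptions on my part, not something this PR specifies):

```js
// Assumed for illustration: bill output bytes at a higher rate than
// other base bytes, since every output created inserts a new key into
// the UTXO database while every input spent deletes one.
const OUTPUT_SCALE = 8; // hypothetical; base bytes use 4, witness uses 1

function getOutputWeightedCost(tx) {
  let cost = tx.overheadSize * 4;       // version, locktime, varints
  for (const input of tx.inputs)
    cost += input.size * 4;             // inputs at the normal base rate
  for (const output of tx.outputs)
    cost += output.size * OUTPUT_SCALE; // outputs billed more harshly
  cost += tx.witnessSize;               // witness bytes at the cheap rate
  return cost;
}
```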
I'm currently working on some benchmarks for worst- and average-case blocks with these updated numbers. From my initial findings, an average-case 1.7mb extension block at the current cost limit (mentioned above) takes about 900-1000ms to verify in bcoin on an i7. This didn't properly account for JIT warmup though, so that figure may be quite a bit inflated.
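For reference, a warmup-aware timing loop looks something like this (a sketch; `block.verify(view)` in the usage note stands in for whatever the real validation entry point is):

```js
// Run verification several times untimed so V8 can JIT-compile the hot
// paths, then measure only steady-state iterations.
function benchVerify(verify, iterations, warmup) {
  for (let i = 0; i < warmup; i++)
    verify(); // untimed warmup passes

  const start = process.hrtime();
  for (let i = 0; i < iterations; i++)
    verify();
  const [sec, nano] = process.hrtime(start);

  return (sec * 1e3 + nano / 1e6) / iterations; // avg ms per run
}

// e.g. benchVerify(() => block.verify(view), 20, 5);
```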
Truly testing mainnet validation time would require creating a chain database that stores 45m+ utxos. Every utxo/utx adds a key to the database, which can trigger another recursive split of nodes up the branch, increase the depth of the leaves, and so on -- at least in, say, a B+ tree. I don't know leveldb's LSM tree as well as I do more traditional database data structures, but the point stands: more keys means slower key lookups. Accurately estimating verification times will take a lot of extra work.
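A rough sketch of the kind of harness I mean (this assumes the `level` npm package; key sizes, value sizes, and counts are illustrative):

```js
// Populate a throwaway leveldb with synthetic utxo-like keys to watch
// lookup latency grow with key count (36-byte keys: txid + index).
const level = require('level');
const crypto = require('crypto');

const db = level('./utxo-bench', {
  keyEncoding: 'binary',
  valueEncoding: 'binary'
});

async function populate(count) {
  let batch = db.batch();
  for (let i = 0; i < count; i++) {
    batch.put(crypto.randomBytes(36), crypto.randomBytes(50));
    if (i % 10000 === 9999) { // flush in chunks
      await batch.write();
      batch = db.batch();
    }
  }
  await batch.write();
}
```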
Personally, I think a lot of that work is partly meaningless. There's no way to tell how this works in practice without introducing it into a system with real economic actors involved. No matter how many benchmarks we run or simulations we create, they won't replicate bitcoin as it exists. For this reason, I think it's important to avoid bikeshedding too much on this issue. We can try to make it as safe as possible, but putting it out there is always the real test for anything.