until we have size-independent new block propagation



Between all the flames on this list, several ideas were raised that did not get much attention. I hereby resubmit these ideas for consideration and discussion.

- Perhaps the hard block size limit should be a function of the actual block sizes over some trailing sampling period. For example, take the median block size among the most recent 2016 blocks and multiply it by 1.5. This allows Bitcoin to scale up gradually and organically, rather than having human beings guessing at what is an appropriate limit.
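A minimal sketch of such a rule, for illustration only (the constants and function name are hypothetical, not from any Bitcoin implementation):

```python
# Sketch of a trailing-sample size rule: hard limit = 1.5x the median
# block size over the most recent 2016 blocks. Illustrative only.
SAMPLE_WINDOW = 2016   # blocks in the trailing sample (~2 weeks)
GROWTH_FACTOR = 1.5    # multiplier applied to the sample median

def max_block_size(recent_sizes):
    """Return the hard limit implied by the trailing window of sizes."""
    window = sorted(recent_sizes[-SAMPLE_WINDOW:])
    median = window[len(window) // 2]
    return int(median * GROWTH_FACTOR)

# If recent blocks are ~400,000 bytes, the cap works out to 600,000:
print(max_block_size([400_000] * 2016))  # 600000
```

Because every node computes the limit from block sizes it already has, the rule adds no new data to consensus, which is part of its appeal.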



--

Gavin Andresen





------------------------------------------------------------------------------



I don't really believe that is possible. I'll argue why below. To be clear, this is not an argument against increasing the block size, only against using the assumption of size-independent propagation.

There are several significant improvements likely possible to various aspects of block propagation, but I don't believe you can make any part completely size-independent. Perhaps the remaining aspects result in terms in the total time that vanish compared to the link latencies for 1 MB blocks, but there will be some block sizes for which this is no longer the case, and we need to know where that is.

* You can't assume that every transaction is pre-relayed and pre-validated. This can happen due to non-uniform relay policies (different codebases, and future things like size-limited mempools), double-spend attempts, and transactions generated before a block had time to propagate. You've previously argued for a policy of not including too-recent transactions, but that requires a bound on network diameter, and if these late transactions are profitable, it has exactly the same problem as making larger blocks non-proportionally more economic for larger pool groups (if propagation time is size dependent).

* This results in extra bandwidth usage for efficient relay protocols, and if discrepancy estimation mispredicts the size of IBLT or error correction data needed, extra roundtrips.

* Signature validation for unrelayed transactions will be needed at block relay time.

* Database lookups for the inputs of unrelayed transactions cannot be cached in advance.

* Block validation with 100% known and pre-validated transactions is not constant time, due to updates that need to be made to the UTXO set (and future ideas like UTXO commitments would make this effect an order of magnitude worse).

* More efficient relay protocols also have higher CPU cost for encoding/decoding.

Again, none of this is a reason why the block size can't increase.
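Pieter's point can be illustrated with a toy relay-time model. Every constant below is invented for illustration (real values depend on hardware and topology); the point is only that the size-proportional terms, negligible at 1 MB, eventually dominate the fixed link latencies:

```python
# Toy model of block relay time. All constants are made-up assumptions,
# chosen only to show how size-dependent terms scale.
LINK_LATENCY_S    = 0.05     # fixed per-hop latency, seconds
BANDWIDTH_BPS     = 100e6    # effective link bandwidth, bits/second
VALIDATE_S_PER_TX = 0.0001   # cost per unrelayed/unvalidated transaction

def relay_time(block_bytes, unrelayed_txs, hops=3):
    transfer = block_bytes * 8 / BANDWIDTH_BPS    # grows with block size
    validate = unrelayed_txs * VALIDATE_S_PER_TX  # also grows with size
    return hops * (LINK_LATENCY_S + transfer) + validate

# At 1 MB the fixed latencies still matter; at 100 MB transfer dominates:
print(relay_time(1_000_000, 50))       # ~0.4 s
print(relay_time(100_000_000, 5_000))  # ~25 s
```

Under these (assumed) numbers, a 100x larger block takes roughly 60x longer to relay, so the advantage of not having to wait at all -- mining on your own block -- grows with size, which is the centralization pressure being discussed.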
If availability of hardware with higher bandwidth, faster disk/RAM access times, and faster CPUs increases, we should be able to have larger blocks with the same propagation profile as smaller blocks with earlier technology. But we should know how technology scales with larger blocks, and I don't believe we do, apart from microbenchmarks in laboratory conditions.

--
Pieter

------------------------------------------------------------------------------

A lot of people like this idea, or something like it. It is nice and simple, which is really important for consensus-critical code.

With this rule in place, I believe there would be more "fee pressure" (miners would be creating smaller blocks) today. I created a couple of histograms of block sizes to infer what policy miners are ACTUALLY following today with respect to block size:

Last 1,000 blocks:
http://bitcoincore.org/~gavin/sizes_last1000.html

Notice a big spike at 750K -- the default size for Bitcoin Core.

This graph might be misleading, because transaction volume or fees might not be high enough over the last few days to fill blocks to whatever limit miners are willing to mine.

So I graphed a time when (according to statoshi.info) there WERE a lot of transactions waiting to be confirmed:

http://bitcoincore.org/~gavin/sizes_357511.html

That might also be misleading, because it is possible there were a lot of transactions waiting to be confirmed because miners who choose to create small blocks got lucky and found more blocks than normal. In fact, it looks like that is what happened: more smaller-than-normal blocks were found, and the memory pool backed up.

So: what if we had a dynamic maximum size limit based on recent history?

The average block size is about 400K, so a 1.5x rule would make the max block size 600K; miners would definitely be squeezing out transactions / putting pressure to increase transaction fees.
Even a 2x rule (implying 800K max blocks) would, today, be squeezing out transactions / putting pressure to increase fees.

Using a median size instead of an average means the size can increase or decrease more quickly. For example, imagine the rule is "median of last 2016 blocks" and 49% of miners are producing 0-size blocks while 51% are producing max-size blocks. The median is max-size, so the 51% have total control over making blocks bigger. Swap the roles, and the median is min-size.

Because of that, I think using an average is better -- it means the max size will change (up or down) more slowly.

I also think 2016 blocks is too long, because transaction volumes change quicker than that. An average over 144 blocks (the last 24 hours) would be better able to handle increased transaction volume around major holidays, and would also be able to react more quickly if an economically irrational attacker attempted to flood the network with fee-paying transactions.

So my straw-man proposal would be: max size 2x the average size over the last 144 blocks, calculated at every block.

There are a couple of other changes I'd pair with that consensus change:

+ Make the default mining policy for Bitcoin Core neutral -- have its target block size be the average size, so miners that don't care will "go along with the people who do care."

+ Use something like Greg's formula for size instead of bytes-on-the-wire, to discourage bloating the UTXO set.

---------

When I've proposed (privately, to the other core committers) some dynamic algorithm, the objection has been "but that gives miners complete control over the max block size."

I think that worry is unjustified right now -- certainly, until we have size-independent new block propagation there is an incentive for miners to keep their blocks small, and we see miners creating small blocks even when there are fee-paying transactions waiting to be confirmed.

I don't even think it will be a problem if/when we do have size-independent new block propagation, because I think the
combination of the random timing of block-finding plus a dynamic limit as described above will create a healthy system.

If I'm wrong, then it seems to me the miners will have a very strong incentive to, collectively, impose whatever rules are necessary (maybe a soft-fork to put a hard cap on block size) to make the system healthy again.
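The median-versus-average scenario above can be checked numerically. This sketch (window composition and function name are illustrative, not a specification) shows a bare 51% majority pinning a median-based limit while the average moves only gradually, plus the straw-man 2x-average-of-144 rule:

```python
import statistics

MAX_SIZE = 1_000_000  # illustrative max block size, bytes

# 49% of miners produce empty blocks, 51% produce max-size blocks:
window = [0] * 988 + [MAX_SIZE] * 1028   # 2016-block sample

# The 51% alone determine the median; the average reflects everyone.
print(statistics.median(window))  # 1000000.0 -- fully controlled by the 51%
print(statistics.mean(window))    # ~510000 -- moves gradually

# Straw-man rule: max size = 2x the average size of the last 144 blocks.
def straw_man_limit(recent_sizes):
    return int(2 * statistics.mean(recent_sizes[-144:]))
```

Swapping the roles (51% empty blocks) pins the median to zero, while the average again lands near the middle, which is why the average is harder for a bare majority to manipulate in either direction.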