all 95 comments

[–]aaronvoisine 48 points  (11 children)

Excellent rebuttal to the "bitcoin doesn't scale" crowd.

I think the "UTXO set as of a certain block" argument could be further improved. What if instead of any random block, there were a set of well known checkpoints, with published and widely verified hashes of the UTXO set as of those checkpoints. Then this mode of partial blockchain download would have the same level of security as using the genesis block, since that too is trusted because it is a well known, widely verified value.
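
As a rough illustration of how such a checkpoint scheme could work on the client side, here is a minimal sketch in Python; the serialization, the `utxo_set` structure and the checkpoint value are hypothetical placeholders, not anything Bitcoin Core actually ships:

```python
import hashlib

# Hypothetical, widely published checkpoint: (block height, expected UTXO set hash).
CHECKPOINT = (350000, "placeholder-hash-published-and-widely-verified")

def utxo_set_hash(utxo_set):
    """Digest a UTXO set given as {(txid, vout): (amount, script)} -- illustrative only."""
    h = hashlib.sha256()
    for (txid, vout), (amount, script) in sorted(utxo_set.items()):
        h.update(f"{txid}:{vout}:{amount}:{script}".encode())
    return h.hexdigest()

def verify_checkpoint(utxo_set, height):
    """Accept a downloaded UTXO set only if it matches the published checkpoint hash."""
    cp_height, cp_hash = CHECKPOINT
    return height == cp_height and utxo_set_hash(utxo_set) == cp_hash
```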

[–]aminok 6 points  (0 children)

What if instead of any random block, there were a set of well known checkpoints, with published and widely verified hashes of the UTXO set as of those checkpoints.

Better yet, have UTXO commitments in block headers, and use the UTXO set a constant number of years in the past as a snapshot. That way you're trusting all of the proof of work that was built on top of it, rather than the parties publishing a hash.
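
A sketch of how a client might consume such a commitment, assuming a hypothetical `utxo_commitment` field in each block header (no such field exists in Bitcoin today) and some agreed canonical digest of the UTXO set:

```python
BLOCKS_PER_YEAR = 52_560  # ~144 blocks per day * 365

def verify_snapshot(headers, utxo_set_digest, years_back=1):
    """
    Accept a downloaded UTXO set only if its digest equals the (hypothetical)
    utxo_commitment in a header buried `years_back` worth of blocks below the
    tip, i.e. a snapshot protected by all the proof of work mined on top of it.
    """
    snapshot = headers[-(1 + years_back * BLOCKS_PER_YEAR)]
    return utxo_set_digest == snapshot.utxo_commitment
```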

[–]mustyoshi[🍰] 9 points  (0 children)

Electrum does this in a way, by verifying that each node has the same UTXO set via hashes.

[–]seweso -1 points  (5 children)

Isn't knowing/checking the difficulty enough? How is someone going to fake that?

[–]maaku7 (Mark Friedenbach - Bitcoin Expert) 0 points  (2 children)

That would mean it costs 25 BTC to create infinite bitcoins.

[–]bughi 2 points  (1 child)

Please elaborate.

I think what /u/seweso meant is that when you receive only the last, say, 10,000 blocks, you can check the proof of work and know that it took a lot of computing power to generate those 10,000 blocks.

How exactly would one create infinite bitcoins with 25?
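
For what it's worth, here is a minimal sketch of the check being discussed, assuming you already have the raw 80-byte headers; the compact "bits" decoding and the double-SHA256 rule follow Bitcoin's header format, but this is illustration, not wallet-grade code:

```python
import hashlib

def bits_to_target(bits: int) -> int:
    """Decode the compact 'bits' difficulty encoding used in Bitcoin headers (mainnet-style values)."""
    exponent = bits >> 24
    mantissa = bits & 0xFFFFFF
    return mantissa << (8 * (exponent - 3))

def header_work(raw_header: bytes) -> int:
    """Return the expected work of one 80-byte header, or raise if its PoW is invalid."""
    digest = hashlib.sha256(hashlib.sha256(raw_header).digest()).digest()
    target = bits_to_target(int.from_bytes(raw_header[72:76], "little"))
    if int.from_bytes(digest, "little") > target:
        raise ValueError("header does not meet its own difficulty target")
    return (1 << 256) // (target + 1)

def total_work(headers):
    """Sum the proof of work over, say, the last 10,000 headers."""
    return sum(header_work(h) for h in headers)
```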

[–]davout-bc -2 points  (1 child)

So if you manage to mine a block at a sufficient difficulty, it can include whatever nonsense you feel like?

[–]BTCisGod 0 points  (0 children)

Huh, of course. Them are your miner machines ain't they? You can do whatever the fuck you want with 'em. There is not a soul on the planet that has to accept your block, however.

[–]searchfortruth 6 points  (5 children)

[–]aminok -1 points  (4 children)

I don't understand:

  1. why system-wide resource usage increased at n², and

  2. how system-wide resource usage could increase at n² while per-user resource usage increases at only n. Who is expending the resources to maintain the system, if not the individual users?

EDIT: it turns out that when validator workload increases at N, and the number of validators increases at N, the system-wide workload increases at N² :)

[–]bitskeptic 2 points  (3 children)

N users doing O(N) work each totals O(N²)

[–]aminok -1 points  (2 children)

Why does a user do N work? Why not a constant amount of work? And if users are each doing O(n²) work, why is the argument being made that per user resource usage is O(n), while only system-wide resource usage is O(n²)?

[–]bitskeptic 2 points  (1 child)

Because blockchain validation requires validating all transactions in the network.

[–]aminok 0 points  (0 children)

Ok that makes sense. I'm conflating the amount of txs generated by each user with the amount of work each full validator needs to do. The latter does indeed increase at O(n), which would make system wide resource usage increase at O(n²) if the percentage of n that validates remains constant.

[–]thieflar 31 points  (1 child)

Brilliantly said. Gavin continues to make good points.

[–]searchfortruth 8 points  (3 children)

Shouldn't the last calculation be: each validating node verifies n·a transactions, where a = average number of transactions per user. Since there are n/100 nodes, the total work is a·n²/100. But I agree the right thing to look at is per-node cost, which is simply a·n. If this is where this O(n²) idea (adding work across nodes) comes from, it is disingenuous.
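
To make the arithmetic concrete, a quick back-of-the-envelope sketch (all numbers invented; `txs_per_user` plays the role of a above, and 1% of users are assumed to run validating nodes):

```python
def validation_work(n_users, txs_per_user=2.0, validator_fraction=0.01):
    """Illustrative only: every validating node checks all n*a transactions."""
    per_node = txs_per_user * n_users                    # a*n per validating node
    n_nodes = max(1, int(n_users * validator_fraction))  # ~n/100 validating nodes
    return per_node, per_node * n_nodes                  # system-wide ~ a*n^2/100

for n in (10_000, 100_000, 1_000_000):
    per_node, total = validation_work(n)
    print(f"n={n:>9}: per-node {per_node:,.0f} tx, system-wide {total:,.0f} tx-validations")
```

Ten times the users means ten times the work per node but a hundred times the work summed over all nodes, which is exactly the distinction being argued about.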

Let's say a piece of software presented a sorted list of users at startup for every user that was currently running the software. Would that sort be considered O(n log n) or O(n·n log n)? To me this software scales according to the former.

[–]aminok 1 point  (0 children)

Big O notation ignores constants.

[–]coinaday 1 point  (0 children)

Came to comments for O(n log n); not disappointed.

I was going to pitch it for the rate of transaction growth, at a complete guess. I would think there would be some increase in the number of transactions per user as the number of users scales, but I agree with the post that it's likely to be strictly less than O(n²).

[–]d4d5c4e5 -1 points  (0 children)

In the example that you're giving with the startup sort, the local client itself would be performing the sort, so the big-O properties would just be whatever the nature of the sorting algorithm is (and not additionally multiplied by some n for the number of peers, because that's already accounted for in the size of the set being sorted).

Bitcoin is different in my opinion for exactly the reasons you're stating.

[–]shesek1 15 points  (6 children)

I might be missing something completely obvious here, but that "you don't need the whole history, just get the UTXOs from random peers, and if they lie to you, it's okay - you'll just see the transaction doesn't get confirmed" argument makes no sense to me and relies on circular logic. For other nodes to know that the transaction isn't valid, they must hold their own valid copy of the history. If everyone [or large parts of the network] behaved in the manner he's describing, Bitcoin would be utterly broken. You'd have nodes that have no way to know which transactions are valid and should be relayed/mined, other than trusting other nodes to do so (and, again, not being able to validate that they're behaving correctly).

Also, his "this is the same behavior we already have today due to the possibility of double spend" argument seems nonsensical. How are these two completely different scenarios the same?

Finally, the two explanations he gives for why people claim Bitcoin scales as O(n²) are explanations I never saw anywhere before... the explanation that is commonly used (which I believed originated from Adam, though I'm told it was Peter) is referenced only at the end.

I must be missing something here, right? Can someone please help me make sense out of this? That whole post seems to be really, utterly, obviously, factually wrong.

Edit: for the first point, this could perhaps make some sense as a low-security, high-trust wallet mode where you blindly trust miners. But then you just drop to SPV-level security, which we already have. Fetching the UTXO set, when you know you can't trust it, doesn't add anything to the equation.

(the quotes in this comment are my own paraphrasing, not original quotes from the post)

[–]nikize 11 points  (0 children)

Sending one transaction to a specific peer and a double spend to the rest of the network is on the same level as sending an invalid UTXO set to that peer, so the attack vector is the same. If the majority of the network relied on this technique there would certainly be problems, but it is more likely that there would still be several fully validating nodes. The main point here is that you could use this to query some 3rd-party node, or you could run one node yourself that you trust, or you can double-check the result against several nodes. Some users might trust one random peer to tell them the truth, accepting the risk that an incorrect answer might bite them later - most of the time that will work just fine. But you are not forced to use it that way.
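
A toy sketch of the "double check the result against several nodes" option; `fetch_utxos` is a stand-in for whatever peer protocol the wallet actually speaks, not a real API:

```python
from collections import Counter

def query_with_cross_check(peers, address, fetch_utxos, quorum=3):
    """
    Ask several peers which UTXOs `address` controls and accept an answer only
    if at least `quorum` independently queried peers agree on it.
    """
    answers = Counter()
    for peer in peers:
        try:
            answers[frozenset(fetch_utxos(peer, address))] += 1
        except Exception:
            continue  # unreachable or misbehaving peer; just skip it
    if not answers:
        return None
    best, votes = answers.most_common(1)[0]
    return set(best) if votes >= quorum else None
```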

Even if you have not seen the different variants of why it supposedly does not scale, Gavin just debunked them all. It all made sense to me, and it's quite obvious that the "factually wrong" bunch are the ones who came up with those O(n²) variants to begin with.

[–]aj6dh4l0001kmkkk34 2 points  (1 child)

To the first point: I think a trusted subset of the 10k or so full nodes he mentioned would be the source of the UTXO set hash for Joe Average's node/wallet/client. He could get the full set from anyone. Also, "trusted" here just means it's expected that, given a random handful of these known nodes, it's sufficiently unlikely they would all be colluding to attack.

[–]shesek1 1 point  (0 children)

He did not mention any trusted parties in the post, as far as I can tell. And if it is based on a trusted-party model, it's quite a big change to the security and decentralization properties of Bitcoin.

[–]freework 0 points  (0 children)

you just drop to SPV-level security

This is a big misconception. Have you ever heard of someone losing bitcoin because they were using an SPV wallet with reduced security? I never have. When you lose bitcoin, it is because someone screwed up (either the developers of your wallet, or you, the wallet user).

The only security difference between SPV and a full node is theoretical. An SPV wallet is more vulnerable to theoretical attacks. In real-world terms they are exactly the same, security-wise.

[–]seweso -1 points  (1 child)

If you are on an attacker's blockchain then they can also confirm any transaction they like. They only need to make sure both chains are similar enough that you don't notice you are on a private blockchain. Until it's too late.

But just looking at the difficulty would be enough to check whether you are on a private chain. So if an attacker completely controls all your connections to the Bitcoin network AND you don't already have the blockchain (or a checkpoint), then the difficulty is the only thing that would still give it away.
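
A toy version of that difficulty sanity check; the threshold and numbers are made up, and the reference difficulty would have to come from somewhere outside the attacker's control (a checkpoint shipped with the software, a friend's node, several block explorers):

```python
def plausibly_real_chain(tip_difficulty, reference_difficulty, tolerance=0.5):
    """
    An attacker privately mining a chain just for one victim cannot match the
    hashrate behind the public chain, so a tip difficulty far below the
    out-of-band reference value is a red flag.
    """
    return tip_difficulty >= reference_difficulty * tolerance

print(plausibly_real_chain(tip_difficulty=5.2e10, reference_difficulty=6.0e10))  # True
```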

[–]BTCisGod 0 points  (0 children)

Plus the fact that the attacker would also need all publicly available block explorers to be dishing out bogus data.

[–]hodlgentlemen 12 points  (6 children)

I would like to see a good rebuttal by u/petertodd. I lean towards accepting Gavin's point but I am prepared to change my mind. Peter seems to be the one who brought up the Big-O argument (IIRC).

[–]go1111111 9 points  (1 child)

Here's a reddit thread where Mike Hearn, Adam Back, and Greg Maxwell argue about this.

The main takeaways are:

- Mike and Greg argue about whether the number of full nodes actually needs to scale linearly with users. The O(N²) analysis assumes that it does.

- Regardless of whether full nodes scale linearly with users, the per-node cost is O(N) either way. IMO that's what matters.

[–]searchfortruth 4 points  (0 children)

/u/awemany does a great job cutting through the issues. Adam is not assuming every user transacts with every other. His n² comes from looking at cost over all nodes, i.e. system-wide cost. He agrees per-node costs are O(n).

[–]shesek1 3 points  (2 children)

I believe it was Adam.

Edit: in reply to me saying it was Adam on IRC: <gmaxwell> shesek: Actually that particular argument isn't originally from adam, most people know it via petertodd, though it's older than petertodd's use-- maybe dan kaminsky? who knows it's a fairly straight forward observation

[–]searchfortruth 1 point  (0 children)

Can you point out where Peter or Adam explain their reasoning?

[–]hodlgentlemen 2 points  (0 children)

In that case, Peter ran with it

[–]sqrt7744 -2 points  (0 children)

Hasn't brought any yet, why would he start now?

[–]veqtrus 8 points  (1 child)

ITT: redditors are outraged that not everyone is willing to process all their transactions for free.

[–]aminok 0 points  (0 children)

If Redditors are wrong, then demonstrate it with rational arguments. Retreating to ad hominem shows both a lack of integrity and a lack of confidence in the rationality of your argument.

[–]d4d5c4e5 3 points  (0 children)

I feel the n² argument is a non sequitur because the total scaling of the entire network doesn't matter; it's not like a single entity is running the entire Bitcoin network. The n² load is distributed across n nodes, meaning that each node shoulders O(n) load.

[–]hodlgentlemen 6 points  (1 child)

An excellent piece. Thanks Gavin.

[–]belcher_ 3 points  (2 children)

The second thing wrong with that argument is that while the entire network might, indeed, perform O(n²) validation work, each of the n individuals would only perform O(n) work – and that is the important metric, because each individual doesn't care how much work the rest of the network is doing to validate transactions, they just care about how much work their computer must do.

I don't think anyone disagrees with this. O(n) is written there as though it's okay.

The problem has always been that every individual node in the network has to verify every coffee purchase.

Let me just copy-paste from Wikipedia:

Some early peer-to-peer (P2P) implementations of Gnutella had scaling issues. Each node query flooded its requests to all peers. The demand on each peer would increase in proportion to the total number of peers, quickly overrunning the peers' limited capacity. Other P2P systems like BitTorrent scale well because the demand on each peer is independent of the total number of peers.

If you look at the current design of Gnutella, you'll find it does not use flooding so much anymore; instead there are other classes of nodes, leaf nodes and ultrapeers.

[–]go1111111 2 points  (0 children)

O(n) is written there as though it's okay.

It might be OK, depending on how technological growth compares with adoption.

[–]thieflar 9 points  (0 children)

O(n) is written there as though it's okay.

No, O(n) is written there as though it's not O(n²). And, to be sure, it is not. It is O(n).

[–]Yoghurt114 1 point  (1 child)

"Everyone does everything", that's bitcoin's broadcast network, it's textbook O(n2) scaling clear and simple.

And although this cow may not be as spherical as the above statement, ignoring bitcoin's scalability is not something I like being in the business of.

[–]aminok 0 points  (0 children)

To quote /u/d4d5c4e5:

I feel the n² argument is a non sequitur because the total scaling of the entire network doesn't matter; it's not like a single entity is running the entire Bitcoin network. The n² load is distributed across n nodes, meaning that each node shoulders O(n) load.

[–]laurentmt 2 points  (0 children)

I strongly disagree with the conclusions of this post.

There are two things wrong with this argument; first, the assumption that a constant proportion of users will run full nodes as the network grows might be incorrect.

This is a good example of a self-fulfilling prophecy. Indeed, if we favor a solution increasing the requirements/burden to run a full node, it's likely that we will see fewer and fewer people running a full node. This is the whole point of the discussions about the differences between the SPV model and layer 2 solutions (like the Lightning Network).

The second thing wrong with that argument is that while the entire network might, indeed, perform O(n²) validation work, each of the n individuals would only perform O(n) work – and that is the important metric.

This is wrong. A network/system consuming resources in O(n²) while providing value in O(n) is doomed to fail because it becomes too expensive. The total work IS an important metric.
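
That objection can be restated numerically; a tiny sketch with invented units, just to show that under those assumptions the cost of delivering each unit of value grows linearly with n:

```python
def cost_to_value_ratio(n_users, cost_coeff=1e-6, value_per_user=1.0):
    """Illustrative only: total cost ~ n^2, delivered value ~ n, so cost/value ~ n."""
    total_cost = cost_coeff * n_users ** 2
    total_value = value_per_user * n_users
    return total_cost / total_value

for n in (1_000, 10_000, 100_000):
    print(f"n={n:>7}: cost per unit of value = {cost_to_value_ratio(n):.3f}")
```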

[–]KayRice 1 point  (2 children)

For the O(n²) crowd to be right it means I would be connecting to every peer when I read or write data to the network.

[–]veqtrus 2 points  (1 child)

Every full node verifies transactions from all peers regardless of whether it is connected to them.

[–]aminok 0 points  (0 children)

Yes but transactions from all peers increase at O(n), not O(n²).

[–][deleted]  (12 children)

[deleted]

    [–]Yoghurt114 7 points  (11 children)

    XT is primarily Mike Hearn's.

    BIP 101 is Gavin's, not Peter Todd's.

    LN is a proposal by Joseph Poon and Thaddeus Dryja; they are not affiliated with Blockstream. The concept they describe has been well known for years, though their solution is novel. Many companies/people (including Blockstream) are working on an implementation.

    Core developers at Blockstream (and elsewhere, Jeff Garzik's BIP 100 and 102 for example) have several concrete (to lesser and greater degrees) proposals, such as Pieter Wuille's BIP 103, Gregory's flexcap, and Adam's 2-4-8 schedule, as well as extension blocks.

    [–]aminok -1 points  (10 children)

    A little off-topic, but I really don't understand the point of Adam's 2-4-8 schedule. We're just going to have to redo all of this in five years or whatever, and with a larger, more fractious community. Whatever new information about scaling and the block size limit we learn in that time will be canceled out by new developments, and therefore new uncertainties, about scaling and the block size limit, meaning we will be in the same place we are now in terms of being able to make accurate predictions about the future.

    Unless the plan is to do a can-kick, 2-4-8 type increase every few years, which means this whole debate will hang over the Bitcoin economy forever, and market players won't be able to make long term plans around Bitcoin, since the protocol's scalability qualities will be in flux.

    [–]smartfbrankings -1 points  (9 children)

    The plan is to can-kick until scalable solutions are found.

    [–]aminok 0 points  (8 children)

    A scalable solution is to increase validator workload at O(N) as the network increases its throughput.

    What you're really saying is that the Bitcoin economy should be put on hold unless a scalable solution that doesn't reduce decentralization, as you define it, is found.

    You've previously called Bitcoin investors who lost money "bagholders", and said we shouldn't plan to scale Bitcoin to serve a billion people, so I wouldn't be surprised if you have no problem with the hundreds of VC-backed companies that are trying to create something in the Bitcoin space fizzling out because Bitcoin stagnated while waiting for a magical scaling solution that might never come.

    [–]smartfbrankings 0 points  (7 children)

    Bitcoin without decentralization is a worthless project.

    I've called those who are willing to throw out the core reason why Bitcoin is different to get a short term bump in price desperate bagholders.

    We should plan on scaling Bitcoin to what it can support without giving up on what makes it unique, not to some arbitrary value that throws the baby out with the bathwater.

    [–]aminok 0 points  (6 children)

    No one said Bitcoin should lose its decentralization. The arguments being made for larger blocks are the following:

    • Block size can be increased without reducing decentralization by limiting the rate of increase to the rate at which bandwidth grows

    • Decentralization can be reduced significantly without Bitcoin's level of decentralization falling below the level needed to remain censorship-resistant.

    • It's the percentage of the world population that validates, not the percentage of the Bitcoin userbase, that defines the level of decentralization, and it's entirely possible the former will not suffer as the network's transaction throughput increases.

    Claiming Bitcoin will become centralized with any straightforward O(N) scaling solution is fearmongering IMO. There are risks with any solution, and the risk of stagnation, which contains within it the potential for a significant opportunity cost, as well as lower resistance to political attack, is totally ignored by analyses like yours.

    [–]smartfbrankings 0 points  (5 children)

    Validation is one part. Mining centralization is another. I'm far more concerned about miner centralization, and we already are too centralized.

    [–]aminok 0 points  (4 children)

    So then implement IBLT or some other propagation compression scheme. Voila, the problem of miner centralization as caused by larger blocks diminishes significantly.

    As I said, there are risks with any solution, including with a solution that slows down Bitcoin's growth. The problem is there is no appreciation for the risks of not scaling soon and fast in most of these anti-large-block analyses.

    [–]smartfbrankings 0 points  (3 children)

    Except for selfish mining, where slow propagation is an advantage.

    What are the risks of not scaling fast enough? We won't get a bunch of Vulture Capitalists putting their parasitic additions onto the blockchain, giving no value to Bitcoin but having us secure it for them?

    [–]muyuu 1 point  (3 children)

    I don't know who said that people would transact with O(n²) other people as the network grew. That's just completely unfounded.

    [–]searchfortruth 10 points  (1 child)

    I think we all have a hard time understanding where this n squared theory comes from. But people have been throwing it around.

    [–]muyuu 1 point  (0 children)

    Maybe they're talking about the space of possible targets? But why would there be a requirement to transact with everybody, or even a fixed ratio of the total members in the network? That just doesn't apply as a scaling requirement.

    [–]110101002 1 point  (0 children)

    It is superlinear; he will transact with more people using Bitcoin as Bitcoin grows. However, it is likely not O(N²).

    [–]jaamfan 1 point  (1 child)

    Made me think of The Big-O from Zeta Gundam

    [–]KayRice -4 points  (6 children)

    Wasn't this posted originally by another user and removed by mods?

    [–]BashCo[M,🍰] 1 point  (5 children)

    No, it was not.

    [–]KayRice 4 points  (4 children)

    They done being Nazi here or just this one post made it through?

    [–]BattlescarScholastca -5 points  (4 children)

    The network is a shared resource, and Bitcoin is O(n²) in that space, no?

    [–]untried_captain 0 points  (1 child)

    How dare you say such a thing!

    [–]BattlescarScholastca -1 points  (0 children)

    Luckily all the replies explained things and set me straight.

    [–]aminok 0 points  (1 child)

    Could you take the time to actually respond to Gavin's arguments against the O(n²) claim?

    [–]BattlescarScholastca 1 point  (0 children)

    Gavin's arguments in the domains addressed appear to be correct.