all 189 comments

[–]MyDixieWreck4BTC 22 points (4 children)

I submitted 10,000 transactions. While my script was hammering the network, I was following blockchain.info's site, and thought they were not propagating in the bitcoin network. Woke up this morning surprised to see all 10,000 transactions were confirmed.

I also corrupted my wallet file, didn't have a backup, and lost the remaining coins I was testing with (not much, but still...). :(

[–]LucZC 17 points (1 child)

You corrupted your wallet by using it intensively? Core developers may be interested in having a look and trying to reproduce the problem on a testnet. It may be a good idea to contact them.

[–]MyDixieWreck4BTC 0 points (0 children)

Was probably my fault. I renamed wallet.dat while bitcoin core was still shutting down.

[–]CryptoVape 2 points (0 children)

Thanks for the deflation!

[–]hellobitcoinworld 53 points (16 children)

20Mb blocks just make this attack 20x more expensive, which is still trivial to a true enemy of Bitcoin.

This point has me concerned. Forget 51% attacks. Flooding the network is far cheaper. We need protection against the ability to do this. I like Op's proposed solutions.

[–]123btc321 11 points (4 children)

During the peak flood my transactions went through in the next block... transaction fees FTW.

[–]nuibox 2 points (2 children)

Yes. I sent 10 or so breadwallet transactions during the test and 1 coinbase transaction immediately after. The coinbase cleared next block. The breadwallet ones took 3 or 4 hours and all went in the same block.

[–]subpar42 1 point (1 child)

I think Aaron mentioned using a slightly revised fee structure going forward on breadwallet.

[–]nuibox 0 points (0 children)

It would have been unfortunate if I was buying something. But since I was just tipping I didn't really care when they cleared. An option would be nice though.

[–]pcvcolin 0 points (0 children)

Exactly. I didn't have a problem during this "peak flood." Sounds like someone is just trying to use this event to advocate extreme approaches on fees and a very big blocksize. The developers have, by the way, been examining for some time how to dynamically adjust fees (I've seen this discussed actively since 2013, and by July 2014 it was fairly well developed; correct me if I'm wrong, but I don't think the fee code in Core has changed since February 2015). However, I do not see justification for pushing fees up toward infinity. Dynamically adjust them, yes; allow them to spiral upward crazily, nope.

[–]coinnoob 40 points (4 children)

I like Op's proposed solutions.

??? increase fees based on the mempool size in bytes ???

first of all, there is no "the mempool", everyone has their own individual mempool and it would be impossible to tell if transaction fees were paid appropriately.

also, it costs very little to spam the mempool with 20mb of junk data, why would miners ever let the mempool become less than 100% full?

this idea is easily the most trivially exploitable and attackable idea i have ever seen proposed on this forum.

A few things were discovered:

nothing was "discovered". everything on this list has been known and well-documented for years.

[–]Cygnus_X 3 points (0 children)

I'm saying this with only about 10 minutes of thought put into it, but...

What if, instead of using the mempool, you looked at how full the last ~50 blocks were, counting only transactions that included a fee, or giving no-fee transactions some sort of partial weight (as an example, a transaction with a fee counts as 1, a transaction without a fee counts as 0.1)?

This would incentivize miners to fill every block instead of using the artificial 0.75 MB soft limit.

You could roll out the fee change (ie, increase when full) with the new block size increase, giving an incentive to miners to hard fork. Fees can only go up with this system.

Unless you controlled 51% of hashing power, spamming the system with transactions would result in a net loss (even with a fee increase, you'd also have to increase your fee for sending, and you'd lose more than you gained).

Spamming the system with transactions could only be done in shorter intervals. The longer it is done, the more costly it will become.
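
The idea above can be sketched in a few lines. This is a hypothetical illustration, not an actual Core proposal: the 0.75 MB soft limit is the miner default mentioned in the thread, and the block data, the 0.1 weight for no-fee transactions, and the multiplier curve are all assumptions.

```python
# Hypothetical sketch: derive a fee multiplier from how full the last ~50
# blocks were, weighting no-fee transactions at 0.1 instead of 1.
SOFT_LIMIT_BYTES = 750_000  # miners' default soft cap discussed in the thread

def weighted_fullness(blocks, free_weight=0.1):
    """blocks: list of (fee_tx_bytes, free_tx_bytes) per block."""
    total = 0.0
    for fee_bytes, free_bytes in blocks:
        total += (fee_bytes + free_weight * free_bytes) / SOFT_LIMIT_BYTES
    return total / len(blocks)

def fee_multiplier(blocks):
    # Only ratchet fees upward, as proposed: never below 1x.
    return max(1.0, 2 * weighted_fullness(blocks))

# 50 recent blocks, each ~600 kB of fee-paying txs and 50 kB of free txs
recent = [(600_000, 50_000)] * 50
print(round(fee_multiplier(recent), 3))  # → 1.613
```

Unlike mempool size, recent block contents are part of the chain, so every node would compute the same multiplier; that is the appeal of the approach over mempool-based schemes.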

[–]BS_Filter 3 points (0 children)

Let's try to look at the positive here. Everything on the list has become easier for me to understand. I didn't realize miners were capping blocks. So we learned something, and perhaps you'll decide to do something more useful with the test data than just criticizing others. Perhaps you'll discover something you didn't know, just like OP has. Those are all good things.

[–]sroose 1 point (0 children)

Tx priority, as opposed to mempool size, can be calculated unambiguously on any machine. So in theory, adapting fee requirements based on tx priority is possible and makes a lot more sense.

[–]coinaday 0 points (0 children)

Aye. As far as I'm concerned, this shows that there is no problem. We do have dynamically changing transaction fees (the user can change their transaction fees to whatever they like) and that worked perfectly. Boom. No problem.

Sure, it'll be good to get the larger block sizes for future growth in capacity, but it's clear that transactions that need to go through will go through (because they'll put a sufficient fee on).

[–]btcxr 1 point (1 child)

This was just the first stress test as well, which means lots of people were not aware of it or couldn't get ready in time. We should plan on doing this at the end of every month until we really hound the system and lots of people are aware of the testing. If only a hand full of people participated, then what will happen when a hundred people run scripts on the next test?

I know you can add me to the list that will hound it next time. I completely missed this test.

[–]locster -1 points (0 children)

hand full

handful

[–]hellyeahent 3 points (0 children)

I do not. Imagine you are a miner with a 20-30% chance of finding a block. The block is 1 MB like now, and the fee is huge for 800-1000 kB. So they will spend, let's say, 0.5 BTC to make the block big and let unaware people pay a few times more than that in fees.

The default fee should be as low as it is now; if your transaction is important, you should pay more. In the future we should increase the fee, and the block size too, or at least have an emergency plan like: if the number of transactions waiting for confirmation is more than 5k, increase the default fee 2x; if 10k, 4x; etc. That way somebody would have a hard time flooding the network.
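
The emergency plan sketched above amounts to doubling the default fee per 5,000-transaction step of backlog. A literal sketch, where the base fee and thresholds are the commenter's examples, not actual Core policy:

```python
# Hypothetical stepwise fee escalation: double the default fee for every
# 5,000 unconfirmed transactions in the backlog.
BASE_FEE_BTC = 0.0001  # illustrative default fee

def default_fee(backlog_size, step=5_000):
    return BASE_FEE_BTC * (2 ** (backlog_size // step))

for backlog in (1_000, 5_000, 10_000, 24_000):
    print(backlog, default_fee(backlog))
```

At the ~24,000-transaction backlog seen during the test, this rule would have put the default fee at 16x its normal level, which is the sense in which flooding becomes progressively more expensive.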

[–]Fiach_Dubh 0 points (0 children)

my concern is that this script allows for the DDoSing of the network as it currently stands. It takes an army of anons to do this, but it's possible with bad-actor trolls like 4chan.

The longer the attack, the greater the impact. Imagine having a week of 4-hour confirmation times... it could be an interesting event, since they could conceivably hold the network hostage. They could cash out when offered a bounty, or hedge BTC with NBT or cash and, if the price takes a significant enough hit, buy back in after the dust settles, or do both.

[–]PirateoftheApes 0 points (0 children)

Or you have transaction fees climb which eliminates spam at the root. There's less ability/incentive to spam the network if it's expensive to do so, no? Isn't it trivial to spam the network with additional transactions if it's free to send them?

[–]laurentmt 19 points (6 children)

Another point:

  • 13 blocks were mined between 23:38 and 03:29 (instead of the theoretical 24 blocks)

  • on a larger scale, 31 blocks were mined between 00:56 and 09:00 (instead of the theoretical 48 blocks)

I see 2 possibilities:

  • these results are caused by variance in mining and it's just a strange coincidence.

  • beyond the blocksize limit, there are other factors limiting scalability.
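
The first possibility can be checked with a quick Poisson calculation. With one block per 10 minutes on average, the 23:38-03:29 window (231 minutes) should yield about 23 blocks; the sketch below estimates how unlikely 13 or fewer would be. It assumes constant hashrate and no difficulty change during the window.

```python
# Probability of seeing <= 13 blocks in 231 minutes under a Poisson model
# with the expected one-block-per-10-minutes rate.
from math import exp, factorial

def poisson_cdf(k, lam):
    return sum(lam**i * exp(-lam) / factorial(i) for i in range(k + 1))

lam = 231 / 10          # expected number of blocks in 231 minutes
p = poisson_cdf(13, lam)
print(f"P(<=13 blocks) = {p:.4f}")
```

The result is on the order of a couple of percent: unusual, but well within what ordinary mining variance produces from time to time, so it doesn't settle the question either way.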

[–]laurentmt 6 points (0 children)

A few more data points:

  • I call "Test period" the 8 hrs between 00:56 and 09:00, based on this chart of the mempool http://imgur.com/TTrg0g5 (from statoshi.info)

  • 54 blocks were mined during the 8hrs preceding the test (16:56 - 00:56)

  • 78 blocks were mined during the 8hrs after the test (09:00 - 17:00).

Once again, it might be a coincidence but the coincidence is intriguing.

[–]Apatomoose 5 points (1 child)

How full blocks are doesn't have any effect on mining difficulty. It's a coincidence.

[–]laurentmt 5 points (0 children)

Yep. The number of transactions has no impact on hashing & difficulty. But it may have an impact on the time required for validation of transactions by mining pool operators.

I guess that most mining pools use custom software. How does this software react to a large increase in the number of transactions?

It would be interesting to get some insights from experts in mining...

[–]cryptonaut420 8 points (2 children)

I have noticed lately in general it seems to be taking longer on average for each block to be found. We are likely due for another difficulty decrease pretty soon

[–]GibbsSamplePlatter 9 points (12 children)

This is actually pretty interesting. Seems the network did just fine, and as long as it isn't long-term, even wallets today act just fine.

Longer-term clearly wallets are going to have to allow fee bumps.

Maybe those wallet devs that are begging for larger blocks could get on this??

[–]smartfbrankings 3 points (3 children)

Bigger concern is having nodes just dump transactions from their mempool at certain limits, especially low-fee transactions.

Even longer term solution is nodes that require payments to interact with. This solves SOOO much but is a much bigger effort.

[–]GibbsSamplePlatter 0 points (0 children)

Yeah, the long-long-term economics are pretty interesting. Perhaps miners wanting to know what's in blocks before they arrive is enough of an incentive?

[–]coinaday 0 points (1 child)

Even longer term solution is nodes that require payments to interact with. This solves SOOO much but is a much bigger effort.

Ooh, that's a cool idea. That could probably be done with a specially forked client. Normal clients wouldn't connect to it; modified clients could connect to normal nodes as usual, or to the special nodes, and there would be a new protocol for the fee list or for negotiating/confirming payment or whatever. I don't see any reason why a premium-node system would require any core changes, although of course it would be nicer to standardize it eventually.

But since it can work without core changes, it seems like it would make sense to try something like that out separately first.

[–]smartfbrankings 1 point (0 children)

It certainly would require a new client. I've had similar ideas, I've seen /u/justusranvier propose it as well. There are no core changes needed for it.

[–][deleted] (1 child)

[deleted]

    [–]eyal0 1 point (0 children)

    It's possible to modify the fee after it's sent by respending it, if the miners can figure out to ditch the old transaction for the new one. That makes double spending easier, however.

    Another option is to spend the result of the first transaction with a fee. Miner software would be incentivized to push the first transaction through in order to push the second transaction.
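
The second option above, spending the output of the stuck transaction with a high fee, is essentially what's known as child-pays-for-parent: a miner who wants the child's fee must include the parent too, so what matters is the combined fee rate of the pair. A minimal sketch with illustrative sizes and fees:

```python
# Child-pays-for-parent: the miner evaluates parent and child as a package.
def package_feerate(parent_fee, parent_size, child_fee, child_size):
    """Satoshis per byte a miner earns by including parent and child together."""
    return (parent_fee + child_fee) / (parent_size + child_size)

parent_fee, parent_size = 0, 250       # stuck zero-fee transaction
child_fee, child_size = 20_000, 200    # respends its output with a large fee

rate = package_feerate(parent_fee, parent_size, child_fee, child_size)
print(f"{rate:.1f} sat/byte for the pair")  # → 44.4 sat/byte for the pair
```

The parent alone pays 0 sat/byte and would sit in the mempool indefinitely; packaged with the child, it becomes attractive to mine.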

    [–]cashstronaut -1 points (5 children)

    Seems the network did just fine

    Uhh...

    24,000 unconfirmed

    This clogged up the blocks for approximately 4 hours

    required 3-4hours for the first confirmation

    [–]mustyoshi 3 points (0 children)

    This just in: When there are lots of transactions, those who pay less to the miners get pushed to the end of the line.

    This is the free market at work.

    [–]GibbsSamplePlatter 7 points (3 children)

    If you think Bitcoin(as-is with current design) is destined to be a "quick, instant, free" payment solution, then it's already failed. I'm talking about consensus and keeping full nodes/miners running.

    [–]Naviers_Stoked 6 points (2 children)

    Well, in general, it is a quick, almost free payment solution. I think the stress test shows that it can't be classified that way in its current implementation with a much larger userbase.

    [–]paperraincoat 4 points (1 child)

    People are a little edgy around here, i.e. "Bitcoin is a failure if everyone can't use it to pay for coffee." There is, and may have to be, a middle ground. Gold bugs don't use flakes of gold to pay for coffee either, yet it's still $1300 an oz.

    Yes, cue the comment about centralization - with two differences. You can still sidestep banks and own Bitcoins, transfer them at will, and nobody can inflate the currency.

    [–]Naviers_Stoked 2 points (0 children)

    I think most people would like bitcoin to do it all - Be the go-to payment mechanism for online purchases, in-person purchases, a better store of value than gold, a means for moving/storing asset ownership, etc.

    I think the fact it's unclear whether bitcoin can fulfill all those use cases makes some people call it a failure and others laugh at that idea because it only needs to really capitalize on one of those to be a resounding success.

    [–]binaryFate 14 points (8 children)

    We need fee's to automatically scale as the mempool becomes filled

    In this case the attack would not target filling the blocks, but would have the same effect anyway: delaying the transactions of the average Joe with a normal fee. Average Joe would need to wait or resend with a higher fee (and would have to guess how much higher, and maybe re-resend, etc.). Instead of competing for space, transactions would compete for space plus fees. I fail to see how it would mitigate the effects of such an attack.

    [–]xcsler 4 points (7 children)

    My understanding is that if the attacker has to pay fees to make bitcoin transactions, eventually they run out of money. Bitcoin fees were modeled after a system to prevent spam emails.

    [–]Wats0ns 0 points (1 child)

    Yes, but fees are currently about $0.01, so a large attacker can make as many transactions as he wants if he has a huge amount of money (think about the enemies of Bitcoin; they have money).

    [–]ztsmart 0 points (0 children)

    And they will have to trade their money for Bitcoin in order to execute this type of attack. Once infected by the virus, it is only a matter of time until they defect to the Bitside.

    [–]coinaday 0 points (4 children)

    Bitcoin fees were modeled after a system to prevent spam emails.

    Well, yes and no. Bitcoin was modeled on hashcash. The idea was actually to use computational difficulty to prevent spam emails. The bitcoin concept is sort of like trading what would have been analogous to stamps in that system.

    So they weren't really doing fees per se. Instead, it was like a very low-difficulty version of mining which a person would do to make a "stamp", which iirc were thought of as one-time-use.

    [–]xcsler 0 points (3 children)

    I thought Bitcoin fees are supposed to prevent malicious actors from flooding the network with their transactions; the attack ends up being too costly. Is this incorrect?

    [–]coinaday 0 points (2 children)

    No, you're right.

    Your second sentence claimed that the fees portion of Bitcoin was modeled on the spam system (hashcash). The weird nuance I'm trying to add is that the core of making a bitcoin in the first place is based on hashcash, and that was how spam email would have been prevented; but charging a transaction fee to prevent spam isn't really modeled after hashcash, because hashcash was about adding computational difficulty while transaction fees are about adding cost. They both have an anti-spam mechanism, but they do it differently.

    Does that make sense? It's a strange nuance to be making, I'm not sure if it's helpful, but I just thought I'd try throwing that out there.

    But yes, absolutely, transaction fees are partly intended to prevent spam. They're also partly intended as an incentive to the miners (currently far less than the block rewards; more intended for once the block rewards are insignificant).

    Another anti-spam/flooding mechanism bitcoin has is the anti-dust rules. I don't remember the specifics, but there's a penalty for making transactions that are "too small", even if the fee is the same, which is a little odd to me, but it was put in there because of how the dice sites operated and did a lot of dust transactions.

    [–]xcsler 0 points (1 child)

    OK, thanks for the clarification.

    Another anti-spam/flooding mechanism bitcoin has is the anti-dust rules. I don't remember the specifics, but there's a penalty for making transactions that are "too small", even if the fee is the same, which is a little odd to me, but it was put in there because of how the dice sites operated and did a lot of dust transactions.

    I found this in the Bitcoin Wiki but I'm not sure if it's what you're referring to:

    Relaying

    The reference implementation's rules for relaying transactions across the peer-to-peer network are very similar to the rules for sending transactions, as a value of 0.0001 BTC is used to determine whether or not a transaction is considered "Free". However, the rule that all outputs must be 0.01 BTC or larger does not apply. To prevent "penny-flooding" denial-of-service attacks on the network, the reference implementation caps the number of free transactions it will relay to other nodes to (by default) 15 thousand bytes per minute.

    [–]coinaday 0 points (0 children)

    However, the rule that all outputs must be 0.01 BTC or larger does not apply.

    That's referencing the part I'm talking about. The rest about the free transactions is separate. See this SO post. The part about not relaying transactions with outputs smaller than 546 satoshi is the tangent about dust I was talking about.

    [–]jimmajamma 5 points (0 children)

    I'm not sure if this is correct or not, but doesn't this also mean that clients have to be a bit smarter? For example, won't a wallet need to either dynamically find the minimum fee that will ensure a transaction gets confirmed, or allow a user to replace a prior transaction with one with a higher fee? Perhaps some do this already, but I don't think I've seen such a feature in Mycelium, for example.

    It also occurred to me that perhaps nodes (and miners?) would be opposed to a dynamic blocksize, as an attack would then mean greater bandwidth requirements. So someone could make it more expensive for all nodes by shuffling satoshis around. Am I looking at this incorrectly? Perhaps it would mean more home-hosted nodes rather than cloud-hosted ones, though bandwidth costs vary by region so that may not always be possible. 1.33 Mbps in the US likely doesn't put a dent in someone's 100 Mbps connection or come near any cap, but elsewhere, or in the cloud, it may double someone's monthly cost.

    I think this stress test was really valuable. Thank you to all involved for helping to add information into the discussion. I think we need more regular testing like this. Does anyone know how much testing like this has been done on the testnet and how relevant that is?
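
The bandwidth arithmetic behind the worry above is simple: averaged over the 10-minute block interval, the sustained rate needed just to download full blocks scales linearly with block size (relay overhead, which can multiply this several times, is ignored here). Notably, the 1.33 Mbps figure mentioned corresponds to 100 MB blocks under this calculation.

```python
# Sustained download rate needed to keep up with full blocks of a given size,
# ignoring relay/gossip overhead.
def block_download_mbps(block_mb, interval_s=600):
    return block_mb * 8 / interval_s  # megabits per second, averaged

for size in (1, 8, 20, 100):
    print(f"{size:>3} MB blocks: {block_download_mbps(size):.2f} Mbps sustained")
```

Even 20 MB blocks average well under 1 Mbps for downloads alone; the real costs are in relay overhead, upload to multiple peers, and monthly data caps.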

    [–]heldertb 9 points (12 children)

    Then wouldn't it be better to dynamically change the block size, just like the difficulty? I mean, what if 20 MB isn't sufficient, even if that's 20 years from now? Also, in 20 years everyone's internet speed and storage will have increased a lot; people will have a theoretical download speed of 1 Gb/s+ (maybe even way more) and 5 TB hard drives in their $400 laptops. I think 20 MB blocks will just move the problem. Making it dynamic could fix this problem forever.

    [–]manginahunter -5 points (11 children)

    You have a time machine? Or is it just pure projection that in 20 years we will download Harry Potter in full HD in one minute?

    [–]152515 7 points (1 child)

    Seems like a very conservative estimate. Should be much faster than that.

    [–]heldertb 0 points (0 children)

    Thank you, I don't think I'm exaggerating these numbers. Look at Google Fiber today. It's still expensive, but in less than 5 years it won't be.

    EDIT: btw, look at how much internet speed and storage have evolved in the last 20 years. It's not just multiplying by 2. Also, making it dynamic instead of setting another fixed size would mean only one hard fork. Imagine how much people will be banging their heads against the wall when 20 MB isn't enough and they have to do another hard fork.

    [–]jcdobber 6 points (3 children)

    I can already do that

    [–]manginahunter -1 points (2 children)

    Only in big cities.

    [–]EonShiKeno 0 points (0 children)

    I guess you have never heard of WISPs.

    [–]11111101000 0 points (0 children)

    The point is the technology exists and it just needs to be distributed instead of invented.

    [–]Huntred 1 point (2 children)

    Downloading an HD movie in under a minute is possible today in many areas.

    [–]manginahunter -1 points (1 child)

    Only in big cities...

    [–]no_game_player 0 points (0 children)

    Or in civilized countries like the Glorious Republic of Moldova!


    This pitch for Moldova has been brought to you by /r/MoldovanCrisis: we're as fanatical about Moldova as /r/bitcoin is about bitcoin.

    [–]NeilDeSnowden 0 points (1 child)

    Many places already have 1Gb/s+

    [–]manginahunter -1 points (0 children)

    In New York or Tokyo maybe, outside big cities not so much.

    [–]xygo 4 points (0 children)

    When will you be lobbying the miners to raise their soft blocksize limits?

    [–]HamBlamBlam 4 points (6 children)

    What we need, in my opinion is an increase in blocksize and dynamically increasing fee's as the mempool becomes full. 20Mb blocks just make this attack 20x more expensive, which is still trivial to a true enemy of Bitcoin. We need fee's to automatically scale as the mempool becomes filled, so the fee is lower when the mempool is only 25% full, but fee's start to approach infinity at 99.9% full.

    So to secure the network against attack, regular users have to pay high enough fees that it's too expensive for someone to flood? Security through inefficiency?
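
The quoted proposal amounts to a fee curve that blows up as the mempool fills. A minimal sketch of that curve, where the base fee and the exact curve shape are assumptions (and note the objection raised elsewhere in the thread that "the mempool" is not a global quantity every node agrees on):

```python
# Hypothetical dynamic fee that grows without bound as mempool fill
# approaches 100%, per the quoted proposal.
def dynamic_fee(fill_fraction, base_fee=0.0001):
    """Fee in BTC; diverges as fill_fraction approaches 1."""
    if not 0 <= fill_fraction < 1:
        raise ValueError("fill fraction must be in [0, 1)")
    return base_fee / (1 - fill_fraction)

for fill in (0.25, 0.5, 0.9, 0.999):
    print(f"{fill:.3f} full -> {dynamic_fee(fill):.6f} BTC")
```

At 25% full the fee barely moves; at 99.9% full it is 1000x the base, which is exactly the "fees approach infinity" behavior being proposed, and exactly the property the reply above objects to.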

    [–]GibbsSamplePlatter 1 point (1 child)

    It's almost like money is useful for signaling preferences.

    [–]xcsler 0 points (0 children)

    Right on, prices are useful. I thought the most interesting observation made by the OP was that "higher fee transactions were not delayed".

    Also, eventually all of the transactions cleared. Wasn't this something that Mike Hearn was worried about? Something about the mempool? I'd assume that if people's transactions were stuck they would resubmit them with a higher fee. It seems like this whole blocksize issue is more philosophical than technical. I don't quite understand all the nuances of Bitcoin's internal plumbing but is my general understanding correct?

    [–]tmornini 0 points (0 children)

    Yes! That is the core security model of Bitcoin, and cryptography in general.

    [–]smartfbrankings -1 points (2 children)

    This is how shortages are resolved constantly. Except when governments get involved and we end up with water shortages. This is why people end up creating agricultural centers in deserts while there isn't enough drinking water.

    [–]HamBlamBlam -1 points (1 child)

    Except this shortage is completely artificial. Centralized payment networks can use competition and innovation to increase efficiency and lower costs. This proposal condemns Bitcoin to maintain a high level of inefficiency for the sake of security. How about solving the scalability problems themselves that make the network vulnerable to trivial amounts of flooding?

    [–]smartfbrankings 2 points (0 children)

    It's not artificial. There are bandwidth, CPU, and storage constraints on anyone running a node. Larger blocks increase these requirements.

    You say condemn, I say enhance.

    I agree solving the attacks is a good idea. You know what would stop any innovation on those solutions? Bumping the block size.

    [–]xygo 7 points (15 children)

    We need fee's to automatically scale as the mempool becomes filled

    Whose mempool? And how do you know with certainty how full it is?

    [–]petertodd (Peter Todd - Bitcoin Expert) 10 points (11 children)

    There is no limit in Bitcoin Core on mempool size. Of course, at some point nodes run out of memory, but few if any attackers seem to be willing to spend the fees required to pull off that attack. (~$2500/GB) Secondly, it's somewhat tricky to do that kind of attack in a way where the transactions actually propagate sufficiently to make the attack an issue. (no, I'm not going to tell you how...)

    edit: fix typo
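
The ~$2500/GB figure is consistent with a back-of-envelope calculation at the minimum relay fee. A sketch, assuming the then-default 0.00001 BTC/kB relay fee and an illustrative ~$250/BTC price (both assumptions; Peter Todd doesn't show his arithmetic):

```python
# Rough cost to fill mempool space with transactions that pay just enough
# to be relayed.
MIN_RELAY_FEE_BTC_PER_KB = 0.00001  # Bitcoin Core default at the time (assumed)
BTC_USD = 250                       # rough mid-2015 price, assumption

def cost_to_fill_usd(gigabytes):
    kilobytes = gigabytes * 1_000_000
    return kilobytes * MIN_RELAY_FEE_BTC_PER_KB * BTC_USD

print(f"${cost_to_fill_usd(1):,.0f} per GB of mempool")  # → $2,500 per GB of mempool
```

That is 10 BTC per gigabyte of mempool, which is why memory-exhaustion attacks via the mempool are expensive relative to the damage they do.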

    [–]jalgroy 6 points (2 children)

    (~$2500k/GB)

    $2500 or $2.5 million?

    [–]petertodd (Peter Todd - Bitcoin Expert) 5 points (1 child)

    ha, yeah, $2500, or $2.5k... I'll let you guess the source of that typo. :)

    [–]btcdrak 0 points (0 children)

    We could say the sky is falling for quite a few things in bitcoin then perform a "stress test" to prove it needs to be "fixed". We already know what happens when you flood the network with spam transactions that do not have the sufficient fee.

    [–]coinaday 0 points (0 children)

    (no, I'm not going to tell you how...)

    Awww. But...but...that's security through obscurity! ;-p

    [–]bitsko 3 points (2 children)

    Via the Bitcoin Core console or bitcoind, type:

    getmempoolinfo

     { "size" : 1582, "bytes" : 1539771 }

    [–]xygo 2 points (1 child)

    That's fine if you are running those apps. And I suppose you have to leave them running for a while so that the txs can build up before you can send a payment. So basically running a full node.
    What about lightweight clients like Electrum or Multibit? I suppose they would have to pull the value from an SPV server. That's one more thing to trust: a malicious server could tell you the fee is 1 BTC per transaction.

    [–]bitsko 4 points (0 children)

    Definitely the less you do on your own, the more you rely on others.

    I'll leave this link here for others.

    http://statoshi.info/

    [–]Noosterdam 7 points (20 children)

    I think this shows that the blocksize wait-and-see crew is right about one thing: the economization that happens via miners setting various policies and soft limits and fee prices when blocks get full is a necessary step in the antifragile process of adaptation within the ecosystem.

    However, the argument that this should happen before an increase of the blocksize cap rather than after remains questionable. In other words, it seems that economization norms and pricing and best practices are going to form as needed anyway, so what is the additional argument that this should happen at 1MB first rather than at 20MB?

    [–]cypherdoc2 4 points (14 children)

    the problem with core devs deciding any block size limit is that they essentially become central bankers, i.e. unilaterally determining everyone's cost of accessing the network. by what metric can they ever determine what size is appropriate, given that most of Bitcoin's incentives are economically and game-theoretically driven? better to let the free market of miners and users work that out via mutually determined fees.

    [–]yeh-nah-yeh 12 points (4 children)

    With no block size limit at all, then?

    [–]mustyoshi -1 points (3 children)

    With no block size limit, we will likely see small blocks; miners have every incentive to limit block size so that fees will go up.

    [–]eyal0 3 points (2 children)

    Mining a small block is just as easy as mining a large block so better to mine the large one and get more fees paid.

    [–]WinkleviBitcoinTrust 0 points (0 children)

    Small blocks propagate faster, increasing odds that it will not be orphaned compared to a larger block.
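
The trade-off in the two comments above can be made concrete with a rough orphan-risk model: every extra byte delays block propagation, and a delay of t seconds costs roughly t/600 in orphan probability, so a rational miner wants at least enough fee per byte to cover the expected loss of the block reward. All parameters below are assumptions for illustration, not measured values.

```python
# Rough breakeven fee per byte for including an extra transaction byte,
# given the orphan risk that byte adds.
BLOCK_REWARD_BTC = 25.0   # subsidy in 2015
SECONDS_PER_BYTE = 1e-5   # relay delay added per extra byte (assumed)
BLOCK_INTERVAL_S = 600.0

def breakeven_fee_per_byte_btc(extra_bytes=1):
    # Orphan probability rises roughly linearly with added propagation delay.
    added_orphan_prob = SECONDS_PER_BYTE * extra_bytes / BLOCK_INTERVAL_S
    return BLOCK_REWARD_BTC * added_orphan_prob / extra_bytes

fee_btc = breakeven_fee_per_byte_btc()
print(f"{fee_btc * 1e8:.3f} satoshis/byte to break even")
```

Under these assumed numbers a transaction paying less than ~42 satoshis/byte is, in expectation, a loss for the miner to include; so "mine the large block for more fees" is only true once fees clear the orphan-risk threshold, which is the point the propagation comment is making.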

    [–]mustyoshi -2 points (0 children)

    Yes, but if all the miners agreed to mine small blocks, the average fee would have to go up. Even greedy miners would understand that if they colluded they would make more money.

    [–]Big_Man_On_Campus 6 points (5 children)

    Central bankers don't set banking fees, they control the size of the pool of currency and debt.

    [–]cypherdoc2 3 points (4 children)

    the analogy is that a small group of people, in this case core devs, are deciding how much it will cost for participants to engage with Bitcoin and how much miners deserve to get paid in tx fees. how can any small group of people make those decisions?

    tx fees should be directly negotiated by the broadest number of participants in the market, in this case between the users and the miners. and don't think that miners won't take into consideration the technical limitations of the network. they will have to.

    [–]bitsko 4 points (0 children)

    I think it's important to note that the core devs are only capable of setting the ceiling, and that miners, able to change their default up to that ceiling, have more of a say than if there were only a fixed max blocksize.

    [–]brainchrist 4 points (0 children)

    They don't get to dictate anything though; they only get to try to establish consensus within the community. If they implement something that is horrible for bitcoin, or more directly for the miners, people are free to run other implementations besides Bitcoin Core. There may be a fork or other bad problems, but as it is the devs don't get to 100% decide anything.

    [–]Big_Man_On_Campus 1 point (1 child)

    I agree, and I see an inherent problem with blockchain technology: the dynamic range of transaction size in bitcoin terms versus transaction size in bytes.

    Right now, bitcoin works just fine for all transaction sizes, because network utilization is relatively light. But that utility dies when the network becomes congested and fees have to rise just to get a reasonable confirmation time. When the network is congested, only large-value transactions make sense, because only the fees paid on large transactions (relative to transaction size) are what the miners will care about. Miners will completely ignore the small-bitcoin, small-fee, 3-4 address multisig transaction. They are right to do this.

    Whatever solution is arrived at needs to continue to allow the small-value transactions to be confirmed in a reasonable time, while letting the big transactions set their own priority with fees.

    I think it would be nice to have something like a fee-rating system feedback loop for wallets. I'm actually surprised that Gavin hasn't done something like this for blockchain.info already. Basically just a quick read-out as to what fee levels are getting confirmed in what time period, dynamically read and displayed as blocks are mined. In order to have a fee market (or any kind of market), you have to have price discovery. Right now there is no price-discovery on fees.

    [–]gavinandresen (Gavin Andresen - Bitcoin Expert) 9 points (0 children)

    [–]Noosterdam 1 point (2 children)

    Absolutely the market should decide. The only question in my mind is if removing the cap altogether is too risky/sudden to enable the economics to play out and mature. Natural orders, of which a free market is one example, are exceptionally effective at allocating resources efficiently and adaptably, but they take time to develop. Some kind of phasing in is preferable in a system that has to be "always on" like Bitcoin, though 20x-ing the cap in a single fork should serve as quite a bit of phasing in.

    [–]cypherdoc2 1 point (0 children)

    i hear you.

    but i think this is a situation in which you have to trust in the economics of self preservation being employed by miners. if the cap was lifted entirely, miners won't mine long term at a loss. they will be forced in aggregate to mine profitably, probably at utility sized rates (as banking should ideally be). nothing like losing money to encourage swift changes to one's economic model.

    even with no cap, miners should construct smallish blocks mainly to capture block rewards, as fees will still be insignificant. over the years, a smooth transition should take place. nothing sudden.

    i don't find it personally helpful to focus so much on edge attacks by non-economic miners as this project is open source with a fixed supply of BTC which incentivizes proper behavior which overwhelms bad behavior.

    [–]paperraincoat 1ポイント2ポイント  (0子コメント)

    Yeah, I always thought the block size should double with each reward halving. It just seemed... cleaner that way.
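    The suggested schedule is easy to pin down (a hypothetical sketch; 210,000 blocks is the actual subsidy-halving interval, but the doubling rule itself is just this comment's idea):

```python
def max_block_size(height, base_mb=1, halving_interval=210_000):
    """Hypothetical schedule: the block size cap doubles every time
    the block subsidy halves (every 210,000 blocks)."""
    halvings = height // halving_interval
    return base_mb * 2 ** halvings

print(max_block_size(0))        # 1 (MB)
print(max_block_size(210_000))  # 2
print(max_block_size(630_000))  # 8
```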

    [–]sroose 4ポイント5ポイント  (2子コメント)

    I think giving some more attention to transaction priority might solve this problem for a large part.

    In Bitcoin, tx priority is very similar to "stake" in altcoins that use proof-of-stake mining. Priority increases with the age of an output. This means that "bad actors" will need a whole lot of old money to attack the system. If someone wants to clog the system with a lot of transactions, they will need a lot of old outputs with a decent amount of coins in them. Because once an output is spent, the coin age is lost.

    [–]PotatoBadger 0ポイント1ポイント  (1子コメント)

    The age priority is in no way part of the actual protocol. It relies on miners honoring the practice, and should not be assumed to continue.

    [–]sroose 1ポイント2ポイント  (0子コメント)

    What we need, in my opinion is an increase in blocksize and dynamically increasing fee's as the mempool becomes full.

    My comment was in reaction to this statement. It will never be possible to use the size of the mempool in the core protocol, because different implementations might have different pool sizes and not all nodes necessarily receive the same transactions.

    Tx priority on the other hand, can be calculated unambiguously on any machine. So in theory, adapting fee requirements based on tx priority is possible and makes sense.
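    For reference, legacy Bitcoin Core computed priority as the coin-age-weighted sum of input values divided by transaction size, which any node can derive from the chain alone; a sketch:

```python
def tx_priority(inputs, tx_size_bytes):
    """Legacy Bitcoin Core priority: sum over inputs of
    (value in satoshis) * (age in blocks), divided by tx size in bytes.
    Deterministic for every node, unlike mempool contents."""
    return sum(value_sat * age_blocks for value_sat, age_blocks in inputs) / tx_size_bytes

# One input of 1 BTC (1e8 satoshis) aged 144 blocks (~1 day) in a
# 250-byte transaction -- the classic "high priority" threshold:
print(tx_priority([(100_000_000, 144)], 250))  # 57600000.0
```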

    [–]patrikr 3ポイント4ポイント  (0子コメント)

    No apostrophe in the word "fees"! Also, megabytes is abbreviated "MB", Mb means megabits.

    [–]futilerebel 3ポイント4ポイント  (3子コメント)

    We need surge pricing like Uber! That way, when there are lots of people making transactions, those with an urgent need can pay more. But wallets need to tell you what fee is likely to get your transaction included in the next block.

    [–]coinslists 2ポイント3ポイント  (2子コメント)

    That's a novel idea, and it could definitely work.

    Miners might be motivated to spam the network to demand higher fees. Game Theory and all.

    Surge pricing usually sucks unless you are the one benefiting from the surge.

    Increasing capacity is a better solution.

    [–]luddist 0ポイント1ポイント  (0子コメント)

    Supply and demand isn't novel.

    [–]futilerebel 0ポイント1ポイント  (0子コメント)

    I mean, in a way, transaction fees are already surge pricing. The difference is that 1) Uber locks in the duration it will take for your driver to arrive as well as the price once you confirm your request, whereas bitcoin miners can always toss out your transaction if new transactions start coming in with higher fees, with no penalty to the miner, and 2) Uber tells you what the current multiplier is, whereas I don't know of any wallets (perhaps they exist) that keep track of the mempool to figure out what fee is likely to get your transaction confirmed quickly during heavy usage.

    I've used surge pricing and it's never "sucked".. it was always worth exactly what I paid. If you don't want to pay, don't.

    That being said, the artificial 1 MB block size limit is way too small to start surge pricing, as, according to Gavin's testing, 20 MB blocks should be handled just fine, even by individual miners. And it'll kill usability unless you get the confirmation time you paid for.

    [–]endersodium 10ポイント11ポイント  (10子コメント)

    Actually the peak yesterday had approx 26,000 unconfirmed txs at a single point. They need to increase the blocksize. I think the connection speed won't be an issue. Even if they have 100mb blocksize, 1.33megabits/s will only be a tiny amount of your whole bandwidth(approx 0.17mbps).

    [–]natodemon 1ポイント2ポイント  (3子コメント)

    Block size increase is a definite yes. Just to avoid a bit of confusion, the value for 100 MB blocks would be 1.33 megabits per second, i.e. the same unit your internet bandwidth is measured in (1.33 Mbps); 0.17 would be megabytes per second. It's still a small fraction of what most homes receive nowadays and an insignificant amount at server farms. I can see very few reasons not to increase the block size as soon as possible.

    [–]whitslack 2ポイント3ポイント  (2子コメント)

    1.33 megabits per second, [...] It's still a small fraction of what most homes receive nowadays

    For every block you receive, you may have to transmit that block many times. 1.33 Mbps is small compared to common downstream limits, but five times 1.33 Mbps exceeds the upstream limits of most residential broadband connections.
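    The arithmetic in both comments checks out; a quick sketch (600 s is the average block interval, and the five-relay figure is the parent comment's assumption):

```python
def block_bandwidth_mbps(block_mb, interval_s=600, relay_peers=1):
    """Average bandwidth needed to move one block of `block_mb`
    megabytes every `interval_s` seconds to `relay_peers` peers,
    expressed in megabits per second."""
    return block_mb * 8 / interval_s * relay_peers

print(round(block_bandwidth_mbps(100), 2))                 # 1.33 (download)
print(round(block_bandwidth_mbps(100, relay_peers=5), 2))  # 6.67 (upload to 5 peers)
```

    6.67 Mbps sustained upstream is indeed above what many residential connections of the era could offer.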

    [–]natodemon 1ポイント2ポイント  (1子コメント)

    That is very true; most upload rates are much lower than download rates on home internet connections. I hadn't thought of that.

    However I highly doubt that the network would jump from 1 MB blocks to 100 MB ones, and I'd imagine the transfer rates for a 20 MB block would be much more manageable for home broadband connections.

    [–]whitslack 0ポイント1ポイント  (0子コメント)

    The actual block size will be whatever gives miners the greatest competitive edge. It has nothing to do with how much use of Bitcoin there is. Miners can refuse to include transactions if smaller blocks are better, and they can fill blocks with garbage data (up to the limit) if larger blocks are better.

    [–]coinaday 0ポイント1ポイント  (0子コメント)

    Even if they have 100mb blocksize, 1.33megabits/s will only be a tiny amount of your whole bandwidth(approx 0.17mbps).

    Check your broadband privilege.

    [–]hellobitcoinworld 0ポイント1ポイント  (4子コメント)

    But increasing the block size only makes this attack 20x more expensive. It's still a very realistic attack to pull off.

    [–]agentcash 6ポイント7ポイント  (3子コメント)

    It cost me ~2.4 BTC in fees which all went to miners. 20x more expensive would make it an idiotic method of attack, as they would be funding the very people working to make bitcoin secure and who have a financial interest in keeping it resilient.

    [–]coinaday 0ポイント1ポイント  (2子コメント)

    It cost me ~2.4 BTC in fees which all went to miners.

    Impressive. You really wanted to test this, eh?

    [–]agentcash 1ポイント2ポイント  (1子コメント)

    It's been a concern of mine for a while. I had hoped there would be more participation, so that we'd really get a taste of how bad things will get during the next bubble.

    I think the bitcoin network handled it quite well, which might not be such a good thing for those of us wanting an increased block size asap.

    [–]coinaday 1ポイント2ポイント  (0子コメント)

    Aye, I can totally understand that perspective.

    I'm heavily divested into altcoins, so I'm pretty unconcerned. If bitcoin goes ahead and tries a hardfork, cool. If it doesn't, cool. I think cryptocurrencies as whole will muddle through.

    By comparison, on the little altcoin that I've been mostly focusing on, right now we're going through a phase where we often are hitting 12+ hour gaps between blocks despite a 1-minute target, because of very inconsistent hash rates. And we're growing through it (in fact, it's a side-effect of going from ~worthless to valuable enough to mine intermittently).

    I think these systems are far more resilient than people expect because of the determination of the people behind them to make them work. Frankly, I think bitcoin will survive and thrive whether or not it increases the blocksize in the immediate future.

    [–]AmIHigh 2ポイント3ポイント  (0子コメント)

    As far as I understand, we didn't even reach the point of the mempool getting too full and causing a cascade of client crashes/clears/rebroadcasting?

    It could be worse if it happened for real over a sustained period.

    [–]crimi666 4ポイント5ポイント  (0子コメント)

    All the countless blog entries weren't nearly as interesting as this live test. Sorry Gavin.

    We need fees to automatically scale as the mempool becomes filled.

    This will probably raise the topic again about keeping the poor out of bitcoin.
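    One hypothetical shape for such a scaling rule (purely illustrative; nothing like this is implemented, and the curve and base fee are made up):

```python
def scaled_fee(base_fee, mempool_fullness):
    """Toy fee curve: near the base fee when the mempool is empty,
    growing without bound as fullness approaches 1.0."""
    if not 0 <= mempool_fullness < 1:
        raise ValueError("fullness must be in [0, 1)")
    return base_fee / (1 - mempool_fullness)

print(round(scaled_fee(1000, 0.25)))   # 1333 satoshis at 25% full
print(round(scaled_fee(1000, 0.999)))  # 1000000 satoshis at 99.9% full
```

    As the later replies point out, any rule of this shape also makes the required fee unpredictable for the sender, which is its main weakness.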

    [–]jstolfi 6ポイント7ポイント  (32子コメント)

    We need fee's to automatically scale as the mempool becomes filled, so the fee is lower when the mempool is only 25% full, but fee's start to approach infinity at 99.9% full.

    That sounds great from a programmer's perspective, but is terrible as a business policy. A service with a broad clientele must charge according to the service rendered, not according to its internal costs and constraints. And the price must be known in advance. And there must be some minimum service guarantee, even if only probabilistic.

    [–]CLSmith15 1ポイント2ポイント  (3子コメント)

    I don't know if that's necessarily true. Law of demand states that as demand increases, price increases. And internal costs often dictate the price to consumers, especially in commodity markets.

    [–]jstolfi 0ポイント1ポイント  (2子コメント)

    And internal costs often dictate the price to consumers, especially in commodity markets.

    The price of a service must be adjusted so that the total revenue covers the total cost plus suitable profit. But the price charged to each customer must be proportional to the service rendered, as perceived by the customer; and not the cost of serving that particular customer.

    "Waiter, my friend here and I both had the same dish; how come my bill is $5 more than his?"

    "The cook told me to tell you. While he was preparing the dishes, somebody sneezed, and that meatball there rolled off the table onto the floor, then out the door, then through the garden and under a bush. A kid found it and brought it back, and I had to give him a $5 tip. That is why."

    [–]CLSmith15 0ポイント1ポイント  (1子コメント)

    I see your point. I think I'm not really clear on how the transaction fee works. If you pay a higher fee, aren't you inherently getting a different service? Aren't you paying for quicker confirmations?

    [–]jstolfi 2ポイント3ポイント  (0子コメント)

    The service that the client receives is the confirmation of his transaction request.

    One problem is that the system does not give him any guarantee of if and when that will happen; even if his is the only transaction in the queue and has a generous fee, it may not be included in the next 20 blocks.

    Another problem is the perceived injustice and arbitrariness of the system with regard to the fee. Why should he pay more today than he paid yesterday, to get the same service? Why should he have to pay 50 cents, while his competitor gets the same service for 5?

    Finally there is the unpredictability. In the bitcoin fee model, the client has no way of knowing how much fee he should include. Even after submitting the transaction request, he cannot tell whether the fee will be sufficient to get through. If the transaction is not picked up by the next block, he is not told why, and does not know what to do: just wait, or increase the fee? When he finally decides for the latter, he again has to guess how much more to pay (and must submit a second transaction, since there is no way to cancel or modify the outstanding one.) And then he will go back to the same indefinite waiting state. And when the transaction is finally included in a block, he may notice that other transactions got in paying smaller fees; so he feels robbed or stupid for paying too much.

    This fee model would be totally unworkable in any business with a broad clientele. It only seems to work with bitcoin because (so far) the fees are practically zero, the network was not saturated, and there are enough miners who actually try to confirm transactions.

    [–]AndreKoster 0ポイント1ポイント  (3子コメント)

    A service with a broad clientele must charge according to the service rendered, not according to its internal costs and constraints.

    That's not how airlines work, to name an example.

    [–]jstolfi -1ポイント0ポイント  (2子コメント)

    As noted in my other response, the airlines (like any business) must make sure that their total revenue covers their total cost. Beyond that, they strive to charge consistent fares depending on the origin, destination, time of day, number of stops and transfers, and other factors that the client can perceive as the quality of the service he gets. It is not uncommon for them to charge the same price when the flight is half-empty and when it is full.

    More importantly, their clients know beforehand how much the flight will cost, and what they will get for that money. They don't have to guess a price, and hope that it will be enough to get them on the next plane -- and get no refund if they paid too much, or if the next plane takes off without picking up any passengers.

    [–]AndreKoster 0ポイント1ポイント  (1子コメント)

    So in your example with the meatball dish, all is fine if the $5 markup is communicated before ordering. Because that's what airlines do, you can be pretty sure that the person sitting next to you has paid a different price.

    Note, my point was regarding "A service with a broad clientele must charge according to the service rendered, not according to its internal costs and constraints". Airlines notoriously charge according to internal costs and constraints.

    [–]jstolfi 0ポイント1ポイント  (0子コメント)

    So in your example with the meatball dish, all is fine if the $5 markup is communicated before ordering. Because that's what airlines do, you can be pretty sure that the person sitting next to you has paid a different price.

    It is acceptable, but not ideal, to charge different prices for clients who order the service at different times, or through different agents. As long as (as you say) the client is told the price before purchasing, and the service is guaranteed after purchase with no extra payments. That implies first-come-first-serve seating, for example.

    [–]Chytrik 0ポイント1ポイント  (0子コメント)

    This idea won't mitigate against attacks fully either, so I don't think we need to worry about it being implemented! :p

    [–]googlemaster1 1ポイント2ポイント  (0子コメント)

    I could see this as a potential problem for mining pools especially. A scenario I have in my head: what if mining pools themselves generate a bunch of bogus transactions in order to drive the unconfirmed count up really high, ensuring feeless transactions are not propagated? Right now the block reward is more significant, but this could put a 1-5 year timeline (less if bitcoin has more significant price action) on getting this done. We shouldn't depend on this being "difficult" to pull off, but come up with a more streamlined solution. Side chains still seem like an interesting way to handle micro-transactions to mitigate this potential even further. Couple that with a block size increase and we've made the network more resilient, but part of me just feels that those are intrinsically both subpar solutions to the problem, despite me thinking both will be an eventuality anyway...

    [–]gamerjammer 1ポイント2ポイント  (0子コメント)

    Good job, OP, for coming up with the stress test etc.

    [–]bitdoggy 2ポイント3ポイント  (0子コメント)

    Why does everyone talk about an attack? There will be thousands of new transactions simply because of increased interest in bitcoin. The trigger can be anything (see the last three major rallies), and it can happen in days, weeks, months...

    [–]lechango 1ポイント2ポイント  (0子コメント)

    Ok, so honest question here, please don't downvote to oblivion. Are other coins with faster block times like LTC any less prone to network flooding? If so, why?

    [–]bitskeptic 1ポイント2ポイント  (0子コメント)

    If we want full blocks then we can't stop accepting transactions into the mempool when it reaches the max block size. Blocks don't come every 10 mins exactly, so you need larger mempools to buffer them so that blocks can be full (even if there are two in quick succession). Also it doesn't make sense that transaction fees should suddenly become cheaper the moment a block is mined.

    We should just let each node decide how big it wants its mempool to be, and once it's full it can kick out transactions based on fee.
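    That per-node policy is straightforward to sketch with a min-heap keyed on fee rate (the capacity and eviction rule here are illustrative assumptions, not any client's actual behavior):

```python
import heapq

class Mempool:
    """Toy fixed-capacity mempool: when full, an incoming transaction
    evicts the lowest-fee-rate entry only if it pays a higher rate."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []  # min-heap of (fee_rate, txid)

    def add(self, txid, fee_rate):
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, (fee_rate, txid))
            return True
        if fee_rate > self.heap[0][0]:
            heapq.heapreplace(self.heap, (fee_rate, txid))
            return True
        return False  # rejected: pool is full and the fee is too low

pool = Mempool(capacity=2)
pool.add("a", 10)
pool.add("b", 50)
print(pool.add("c", 5))   # False -- below the cheapest resident tx
print(pool.add("d", 20))  # True  -- evicts "a" (fee rate 10)
```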

    [–]goonsack 1ポイント2ポイント  (0子コメント)

    Under your proposal, wouldn't mining pools be incentivized to send a bunch of non confirming transactions whenever the mempool gets too small? To increase the size of the mempool and extract more fees?

    [–]btcbible 1ポイント2ポイント  (1子コメント)

    Great info and suggestions. Thanks! /u/changetip $1

    [–]changetip 0ポイント1ポイント  (0子コメント)

    The Bitcoin tip for 4,530 bits ($1.00) has been collected by 45sbvad.

    what is ChangeTip?

    [–]btcdrak 1ポイント2ポイント  (0子コメント)

    It looks like a lot of pools have much lower caps than 750kb according to the graph posted by gavin.

    They successfully proved that the network does not confirm txs with insufficient fees in the first block...

    [–]blackcoinprophet 1ポイント2ポイント  (0子コメント)

    You left out the most important bit of information: how much did this stress test cost your handful of individuals to pull off?

    This is a pretty damn big deal if a small group can cheaply clog up the entire network.

    [–]ProHashing 1ポイント2ポイント  (2子コメント)

    Agreed. These tests are demonstrating that a one-time increase will be a mistake of epic proportions. The solution must be dynamic, whether it be a time-based function or a load-based function.

    [–]jstolfi 6ポイント7ポイント  (1子コメント)

    From a programming viewpoint, a dynamic limit is the same as no limit at all.

    The purpose of a static limit is to tell every player (miner, node, client) what is the largest block that he must be prepared to handle, and to tell every miner what is the largest block that he can put out without choking other players. With an explicit max block size, programmers can use simpler, more efficient, and more reliable data structures (e.g. static monolithic buffers, instead of dynamically allocated and segmented ones). They can also tell users how much memory they need in order to run the software, and guarantee that it will not run out of memory in that case.

    A static max block size is also a security measure. Suppose that 10% of the nodes are known to be running on computers that have only 8 GB of memory and no swap. Then a malicious miner with (say) 1% of the total hashpower could try to mine blocks with 9 GB of bogus transactions. After about 100 trials he will succeed. Then those 10% of the nodes will crash, while the other 90% will accept the 9 GB block as valid. From then on, those 10% will be permanently unable to process the blockchain, unless their memory is physically expanded.

    Note that the above risk does not exist when there is a hard limit (say, 20 MB) to the block size. If a rogue miner issues a block larger than the limit, that block will be rejected by all other players, as soon as they receive the first 20 MB, and no player should choke on it.
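    The rejection behavior described here can be sketched as a streaming size check (an illustrative sketch, not actual node code; real clients enforce the cap during message deserialization):

```python
MAX_BLOCK_SIZE = 20 * 1024 * 1024  # hypothetical 20 MB cap

def read_block(stream, chunk_size=65536):
    """Read a block from a peer, aborting as soon as the data exceeds
    the hard cap -- so an oversized block never has to fit in memory,
    which is the security property described above."""
    received = bytearray()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            return bytes(received)
        received += chunk
        if len(received) > MAX_BLOCK_SIZE:
            raise ValueError("block exceeds max size; rejecting peer data")

import io
try:
    read_block(io.BytesIO(b"\x00" * (MAX_BLOCK_SIZE + 1)))
except ValueError as e:
    print(e)  # block exceeds max size; rejecting peer data
```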

    [–]manginahunter 1ポイント2ポイント  (0子コメント)

    No limit will lead to centralization and weakening the network...

    [–]danster82 1ポイント2ポイント  (0子コメント)

    Great so a newbie loses his account balance in a fee when paying for a coffee.

    [–]n1nj4_v5_p1r4t3 0ポイント1ポイント  (0子コメント)

    Awesome recap!

    [–]_niko 0ポイント1ポイント  (0子コメント)

    Point #4 is meaningless, as it only hints at confused coding and presentation on the blockchain.info website. It has nothing to do with the Bitcoin network.

    [–]bitsteiner 0ポイント1ポイント  (0子コメント)

    Why not add a metric for block size and fee, similar to difficulty? Both could go up and down depending on transaction traffic.

    [–]mughat 0ポイント1ポイント  (0子コメント)

    approach infinity

    Infinity? How about just 21 million?

    [–]locster 0ポイント1ポイント  (0子コメント)

    What we need, in my opinion is an increase in blocksize and dynamically increasing fee's as the mempool becomes full

    Not sure I agree with that. Any transactor can look at the recent history of transaction volume and fees and determine what tx speed is (typically) gained for a given fee. So the transactor wanting high speed needs to set a fee at or above the recent highest; that's just supply and demand. Supply here is space in each block, which is fixed, so as demand exceeds supply the tx fees increase to reduce demand. If demand is inelastic then fees get stupidly expensive.

    [–]arivar 0ポイント1ポイント  (3子コメント)

    20Mb blocks just make this attack 20x more expensive

    Since blocks already average about 500 kB and the limit is 1 MB, this attack would have cost 40x more with a 20 MB limit.

    [–]smartfbrankings 0ポイント1ポイント  (2子コメント)

    This isn't quite true. We would likely have lower fees with larger blocks, as there isn't as much contention for space. So you could pull this off with a lot less money, since miners could include you with really tiny fees at virtually no cost to themselves.

    [–]arivar 0ポイント1ポイント  (1子コメント)

    Well, but the attack can't be effective this way, since miners would prioritize normal transactions with standard fees. So the only transactions delayed by the network would be those of the attackers.

    [–]smartfbrankings 0ポイント1ポイント  (0子コメント)

    Miners and other attempted freeloaders.

    [–]jeorgen -1ポイント0ポイント  (1子コメント)

    "20Mb blocks just make this attack 20x more expensive, which is still trivial to a true enemy of Bitcoin"

    No, that is not trivial. A 10% increase may be trivial, but a 20x change is more than an order of magnitude, and is not trivial.

    Attacks that are just feasible with a 1MB limit are impossible with a 20MB limit. If you can get an order of magnitude somewhere in your code, as a programmer you are very happy, and that magnitude will help with all other kinds of defenses.

    All other things equal an attacker must spend 20x the money, which if it goes from say $200,000,000 to $4,000,000,000 is a rather big difference.

    [–]ProHashing -1ポイント0ポイント  (0子コメント)

    But that's not how much it actually costs. In the first test, people were spending $40 each.