From: Peter Todd <pete@pe...> - 2015-05-08 03:41:34

On Thu, May 07, 2015 at 03:31:46PM -0400, Alan Reiner wrote:
> We get asked all the time by corporate clients about scalability. A
> limit of 7 tps makes them uncomfortable that they are going to invest
> all this time into a system that has no chance of handling the
> economic activity that they expect it to handle. We always assure them
> that 7 tps is not the final answer.

Your corporate clients, *why* do they want to use Bitcoin, and what for
exactly?

--
'peter'[:-1]@petertodd.org
From: Cameron Garnham <da2ce7@gm...> - 2015-05-08 03:12:41

While I have been in the Bitcoin community for a long time, I haven't
been so directly involved in the development. However, I wish to
suggest a different pre-hard-fork soft-fork approach:

Set a 'block size cap' in a similar way to how we set difficulty.

Every 2016 blocks, take the average size of the blocks and multiply it
by 1.5x, rejecting blocks that are larger than this size for the next
2016-block period.

I would of course suggest that we keep the limits at a minimum of
100 kB and a maximum of (initially) 990 kB (not 1 MB on purpose, as
this should become the new limit), rounding up to the nearest 10 kB.

A: we don't have pressure at the 1 MB limit (we reduce the limit in a
flexible manner to 990 kB).

B: we can upgrade the network to an XYZ hard limit, then slowly raise
the soft limit after being sure the network, as a whole, is ready.

If we one day remove the block size limit, this rule will stop a rogue
miner from making 10 MB, 100 MB, or 1 GB blocks.

This could be implemented by the miners without breaking any of the
clients, and would tend to produce better dynamic fee pressure. It
gives the miners a mechanism to build consensus on what block sizes
they believe are best for the network, and allows block sizes to grow
dynamically in response to larger demand.

On 5/8/2015 10:35 AM, Pieter Wuille wrote:
> Yes, indeed, Bitcoin would be dead if this actually happens. But that
> is still where the power lies: before anyone (miners or others) would
> think about trying such a change, they would need to convince people
> and be sure they will effectively modify their code.
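[For concreteness, a minimal Python sketch of the retarget-style cap
described above; the constants and helper name are illustrative
assumptions, not code from any real client.]

    # Sketch: recompute a miner-enforced soft cap once per 2016 blocks.
    MIN_CAP = 100_000      # 100 kB floor suggested in the post
    MAX_CAP = 990_000      # initial 990 kB ceiling suggested in the post
    GRANULARITY = 10_000   # round up to the nearest 10 kB

    def next_soft_cap(last_period_sizes):
        """last_period_sizes: the sizes of the previous 2016 blocks."""
        avg = sum(last_period_sizes) / len(last_period_sizes)
        target = int(avg * 1.5)
        # round up to the nearest 10 kB, then clamp to the agreed bounds
        target = -(-target // GRANULARITY) * GRANULARITY
        return max(MIN_CAP, min(MAX_CAP, target))

    # e.g. an average block of 400 kB yields a 600 kB cap next period
    print(next_soft_cap([400_000] * 2016))  # -> 600000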
From: Pieter Wuille <pieter.wuille@gm...> - 2015-05-08 02:35:08

On May 7, 2015 3:08 PM, "Roy Badami" <roy@...> wrote:
> On Thu, May 07, 2015 at 11:49:28PM +0200, Pieter Wuille wrote:
> > I would not modify my node if the change introduced a perpetual
> > 100 BTC subsidy per block, even if 99% of miners went along with it.
>
> Surely, in that scenario Bitcoin is dead. If the fork you prefer has
> only 1% of the hash power it is trivially vulnerable not just to a 51%
> attack but to a 501% attack, not to mention the fact that you'd only
> be getting one block every 16 hours.

Yes, indeed, Bitcoin would be dead if this actually happens. But that
is still where the power lies: before anyone (miners or others) would
think about trying such a change, they would need to convince people
and be sure they will effectively modify their code.

--
Pieter
From: Adam Back <adam@cy...> - 2015-05-08 02:29:12

Well, these are all very extreme circumstances, and you'd have to
assume no rational player with an interest in Bitcoin would go there,
but to play your analysis forward: users are also not powerless at the
extreme. They could, for example, change the hash function, rendering
currently deployed ASICs useless, and reset difficulty at the same
time, or freeze transactions until some minimum hashrate is reached.
People would figure out the least bad way forward.

Adam

On May 7, 2015 3:09 PM, "Roy Badami" <roy@...> wrote:
> On Thu, May 07, 2015 at 11:49:28PM +0200, Pieter Wuille wrote:
> > I would not modify my node if the change introduced a perpetual
> > 100 BTC subsidy per block, even if 99% of miners went along with it.
>
> Surely, in that scenario Bitcoin is dead. If the fork you prefer has
> only 1% of the hash power it is trivially vulnerable not just to a 51%
> attack but to a 501% attack, not to mention the fact that you'd only
> be getting one block every 16 hours.
From: Jeff Garzik <jgarzik@bi...> - 2015-05-08 02:10:10

On Thu, May 7, 2015 at 9:40 PM, Tom Harding <tomh@...> wrote:
> On 5/7/2015 12:54 PM, Jeff Garzik wrote:
> > 2) Where do you want to go? Should bitcoin scale up to handle all
> > the world's coffees?
>
> Alan was very clear. Right now, he wants to go exactly where Gavin's
> concrete proposal suggests.

Gavin proposed 20 MB blocks, AFAIK - 140 tps
Alan proposed 100 MB blocks - 700 tps

For reference:
Paypal is around 115 tps
VISA is around 2000 tps (perhaps 4000 tps peak)

I ask again: where do we want to go? This is the existential question
behind block size.

Are we trying to build a system that can handle Paypal volumes? VISA
volumes? It's not a snarky or sarcastic question: are we building a
system to handle all the world's coffees? Is bitcoin's main chain and
network - Layer 1 - going to receive direct connections from 500m
mobile phones, broadcasting transactions?

We must answer these questions to inform the change being discussed
today, in order to decide what makes the most sense as a new limit. Any
responsible project of this magnitude must have a better story than
"zomg 1MB, therefore I picked 20MB out of a hat" - it must be able to
answer /why/ the new limit was picked.

As Gavin notes, changing the block size is simply kicking the can down
the road:
http://gavinandresen.ninja/it-must-be-done-but-is-not-a-panacea

Necessarily one must ask, today, what happens when we get to the end of
that newly paved road.

--
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc. https://bitpay.com/
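[The tps figures quoted in this thread follow from simple arithmetic; a
sketch, assuming ~250-byte average transactions and one block per 600
seconds - both rough assumptions, and real averages vary.]

    # Back-of-the-envelope throughput math behind the figures above.
    AVG_TX_BYTES = 250        # assumed average transaction size
    BLOCK_INTERVAL_S = 600    # one block per ten minutes

    def tps(block_size_bytes):
        return block_size_bytes / AVG_TX_BYTES / BLOCK_INTERVAL_S

    print(f"{tps(1_000_000):.0f} tps at 1 MB")      # ~7 tps
    print(f"{tps(20_000_000):.0f} tps at 20 MB")    # ~133 tps ("~140")
    print(f"{tps(100_000_000):.0f} tps at 100 MB")  # ~667 tps ("~700")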
From: Joel Joonatan Kaartinen <joel.kaartinen@gm...> - 2015-05-08 01:51:43

Having observed the customer support nightmare it tends to cause for a
small exchange service when 100% full blocks happen, I've been thinking
that the limit really should be dynamic and respond to demand and the
amount of fees offered. It just doesn't feel right when it takes ages
to burn through the backlog after 100% full is hit for a while.

So, while pondering this, I got an idea that I think has a chance of
working and that I can't remember seeing suggested anywhere. How about
basing the maximum valid size for a block on the total bitcoin days
destroyed in that block? That should still stop transaction spam but
naturally expand the block size when there's a backlog of real
transactions. It'd also provide an indirect mechanism for increasing
the maximum block size based on fees: if there are a lot of fees but
few bitcoin days destroyed, there'd be an incentive to pay someone to
spend an older txout to expand the maximum.

I realize this is a rather half-baked idea, but it seems worth
considering.

- Joel

On Thu, May 7, 2015 at 10:31 PM, Alan Reiner <etotheipi@...> wrote:
> This *is* urgent and needs to be handled right now, and I believe
> Gavin has the best approach to this. I have heard Gavin's talks on
> increasing the block size, and the two most persuasive points to me
> were:
>
> (1) Blocks are essentially nearing "full" now. And by "full" he means
> that the reliability of the network (from the average user
> perspective) is about to be impacted in a very negative way (I believe
> it was due to the inconsistent time between blocks). I think Gavin
> said that his simulations showed 400 kB - 600 kB worth of transactions
> per 10 min (approx 3-4 tps) is where things start to behave poorly for
> certain classes of transactions. In other words, we're very close to
> the effective limit in terms of maintaining the current "standard of
> living", and with a year needed to raise the block size this actually
> is urgent.
>
> (2) Leveraging fee pressure at 1 MB to solve the problem is actually a
> really bad idea. It's really bad while Bitcoin is still growing, and
> relying on fee pressure at 1 MB severely impacts the attractiveness
> and adoption potential of Bitcoin (due to high fees and
> unreliability). But more importantly, it ignores the fact that 7 tps
> is pathetic for a global transaction system. It is a couple orders of
> magnitude too low for any meaningful commercial activity to occur. If
> we continue with a cap of 7 tps forever, Bitcoin *will* fail. Or at
> best, it will fail to be useful for the vast majority of the world
> (which probably leads to failure). We shouldn't be talking about fee
> pressure until we hit 700 tps, which is probably still too low.
>
> You can argue that side chains and payment channels could alleviate
> this. But how far off are they? We're going to hit effective 1 MB
> limits long before we can leverage those in a meaningful way. Even if
> everyone used them, getting a billion people onto the system just
> can't happen even at 1 transaction per year per person to get into a
> payment channel or move money between side chains.
>
> We get asked all the time by corporate clients about scalability. A
> limit of 7 tps makes them uncomfortable that they are going to invest
> all this time into a system that has no chance of handling the
> economic activity that they expect it to handle. We always assure them
> that 7 tps is not the final answer.
>
> Satoshi didn't believe 1 MB blocks were the correct answer. I
> personally think this is critical to Bitcoin's long-term future. And
> I'm not sure what else Gavin could've done to push this along in a
> meaningful way.
>
> -Alan
>
> On 05/07/2015 02:06 PM, Mike Hearn wrote:
> > > I think you are rubbing against your own presupposition that
> > > people must find an alternative right now. Quite a lot here do not
> > > believe there is any urgency, nor that there is an imminent
> > > problem that has to be solved before the sky falls in.
> >
> > I have explained why I believe there is some urgency, where by "some
> > urgency" I mean: assuming it takes months to implement, merge, test,
> > release and for people to upgrade.
> >
> > But if it makes you happy, imagine that this discussion happens all
> > over again next year and I ask the same question.
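[One way to read Joel's idea as code, purely as a sketch: the base
limit and the bytes-per-BDD exchange rate below are invented parameters
for illustration, not part of the proposal as posted.]

    # Sketch: let bitcoin days destroyed (BDD) buy extra block space.
    BASE_LIMIT_BYTES = 1_000_000   # assumed 1 MB floor
    BYTES_PER_BDD = 1_000          # assumed extra bytes allowed per BDD

    def bitcoin_days_destroyed(inputs):
        """inputs: iterable of (value_btc, age_days) for spent outputs."""
        return sum(value * age for value, age in inputs)

    def max_block_size(block):
        """block: list of transactions, each a list of (value, age)."""
        bdd = sum(bitcoin_days_destroyed(tx) for tx in block)
        return BASE_LIMIT_BYTES + int(bdd * BYTES_PER_BDD)

    # A block spending 100 BTC idle for 50 days earns 5 MB of room.
    print(max_block_size([[(100.0, 50.0)]]))  # -> 6000000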
From: Tom Harding <tomh@th...> - 2015-05-08 01:41:15

On 5/7/2015 12:54 PM, Jeff Garzik wrote:
> In the short term, blocks are bursty, with some on 1-minute intervals,
> some with 60-minute intervals. This does not change with larger
> blocks.

I'm pretty sure Alan meant that blocks are already filling up after
long inter-block intervals.

> 2) Where do you want to go? Should bitcoin scale up to handle all the
> world's coffees?

Alan was very clear. Right now, he wants to go exactly where Gavin's
concrete proposal suggests.
From: Peter Todd <pete@pe...> - 2015-05-08 00:06:09

On Thu, May 07, 2015 at 10:02:09PM +0000, Matt Corallo wrote:
> * Though there are many proposals floating around which could
> significantly decrease block propagation latency, none of them are
> implemented today. I'd expect to see these not only implemented but
> being used in production (though I don't particularly care about them
> being all that stable). I'd want to see measurements of how they
> perform both in production and in the face of high packet loss (eg
> across the GFW or in the case of small/moderate DoS). In addition, I'd
> expect to see analysis of how these systems perform in the worst case,
> not just packet-loss-wise, but in the face of miners attempting to
> break the system.

It's really important that we remember we're building security
software: it *must* hold up well even in the face of attack. That means
we need to figure out how it can be attacked, what the costs/profits of
such attacks are, and whether the holes can be patched. Just testing
the software with simulated loads is insufficient.

Also, re: breaking, don't forget that this may not be a malicious act.
For instance, someone can send contradictory transactions to different
parts of the network simultaneously to prevent mempool consistency -
there's no easy way to fix this. There are also cases where miners have
different policy than others, e.g. version disagreements, commercial
contracts for tx mining, etc.

Finally, remember that in many situations it's not in miners'
incentives for their blocks to propagate to more than ~30% of the
hashing power.(1)

Personally, I'm really skeptical that we'll ever find a block
propagation latency reduction technique that successfully meets all the
above criteria without changing the consensus algorithm itself.

* How do we ensure miners don't cheat and stop validating blocks fully
before building on them? This is a significant moral hazard with larger
blocks if fees don't become significant, and can lead to dangerous
forks. Also, think of the incentives: why would a miner ever switch
from the longest chain, even if they don't actually have the blocks to
back it up?

* We need a clear understanding of how we expect new full nodes, pruned
or not, to sync up to the blockchain. Obviously 20MB blocks
significantly increase the time and data required to sync. Are we
planning on simply giving up on full validation and trusting others for
copies of UTXO sets? Are we going to rely on UTXO commitments? What
happens if the UTXO set size itself increases greatly?

> * I'd very much like to see someone working on better scaling
> technology, both in terms of development and in terms of getting
> traction in the marketplace. I know StrawPay is working on
> development, though it's not obvious to me from their website how far
> along they are, but I don't know of any commitments by large players
> (either SPV wallets, centralized wallet services, payment processors,
> or any others) to support such a system (to be fair, it's probably too
> early for such players to commit to anything, since nothing exists in
> public).

A good start would be for those players to commit to the general
principles of these systems, and if they can't commit, explain why. For
instance, I'd be very interested in knowing whether services like
Coinbase see legal issues with adopting technologies such as payment
channels between hosted wallet providers, payment processors, etc. I
certainly wouldn't be surprised if they see doing anything not
on-blockchain as a source of legal uncertainty - based on discussions
I've had with regulatory types in this space, it sounds like there's a
reasonable chance protocol details such as requiring that transactions
happen on a public blockchain will be "baked into" regulatory
requirements.

> * I'd like to see some better conclusions to the discussion around
> long-term incentives within the system. If we're just building Bitcoin
> to work in five years, great, but if we want it all to keep working as
> subsidy drops significantly, I'd like a better answer than "we'll deal
> with it when we get there" or "it will happen, all the predictions
> based on people's behavior today say so" (which are hopefully invalid
> thanks to the previous point). Ideally, I'd love to see some real fee
> pressure on the network already starting to develop when we commit to
> hardforking in a year.

Agreed.

> Not just full blocks with some fees because wallets are including far
> greater fees than they really need to, but software which properly
> handles fees across the ecosystem, smart fee increases when
> transactions aren't confirming (eg replace-by-fee, which could be
> limited to increase-in-fees-only for those worried about
> double-spends).

FWIW I've got some funding to implement first-seen-safe replace-by-fee.

1) http://www.mail-archive.com/bitcoin-development@.../msg03200.html

--
'peter'[:-1]@petertodd.org
From: Joseph Poon <joseph@li...> - 2015-05-07 23:51:32

Hi Matt,

I agree that starting discussion on how to approach this problem is
necessary; it's difficult taking positions without details on what is
being discussed. A simple hard 20-megabyte increase will likely create
perverse incentives; perhaps a method can exist with some safe
transition.

I think ultimately the underlying tension in this discussion is about
the relative power of miners. Any blocksize increase will increase the
influence of miners, and it is about understanding the tradeoffs of
each possible approach.

On Thu, May 07, 2015 at 10:02:09PM +0000, Matt Corallo wrote:
> * I'd like to see some better conclusions to the discussion around
> long-term incentives within the system. [...] Ideally, I'd love to see
> some real fee pressure on the network already starting to develop when
> we commit to hardforking in a year. Not just full blocks with some
> fees because wallets are including far greater fees than they really
> need to, but software which properly handles fees across the
> ecosystem, smart fee increases when transactions aren't confirming (eg
> replace-by-fee, which could be limited to increase-in-fees-only for
> those worried about double-spends).

I think the long-term fee incentive structure needs to be significantly
more granular. We've all seen miners and pools take the path of least
resistance; often they just do whatever the community tells them to,
blindly. While this status quo can change in the future, I think
designing sane defaults is a good path for any possible transition. It
seems especially reasonable to maintain fee pressure for normal
transactions during a hard-fork transition. It's possible to do so
using some kind of soft-cap structure. Building a default soft cap of
1 megabyte into bitcoin-core for some far-future scheduled fork would
seem like a sane thing to do. It also seems viable to be far more
aggressive.

What's your (and the community's) opinion on some kind of coinbase
voting protocol for soft-cap enforcement? It's possible to write
messages into the coinbase for an enforceable soft cap that orphans out
any block which violates these rules. It seems safest to have the first
hardforked block of the transition be above 1 MB, but have the next
block default back to an enforced 1 MB cap; if miners agree to go above
this, they must vote in their coinbase to do so. There's a separate
discussion about this starting at:
CAE-z3OXnjayLUeHBU0hdwU5pKrJ6fpj7YPtGBMQ7hKXG3Sj6hw@...

I think defaulting to some kind of mechanism for reading the coinbase
seems like a good idea; left alone, miners may not do so. That way it's
possible to have your cake and eat it too: fee pressure will still
exist, while block sizes can increase (provided it's in the miners'
greater interest to do so).

The Lightning Network's security model in the long term may rely on a
multi-tier soft cap, but I'm not sure. If second-order systemic miner
incentives were not a concern, a system which has an enforced soft cap
and permits breaching that soft cap for some agreed-upon, much higher
fee would work best. LN works without this, but it seems more secure if
some kind of miner consensus rule is reached regarding prioritizing the
behavior of second-layer consensus states. No matter how it's done,
certain aspects of the security model of something like Lightning rely
upon block space being available for transactions to enter the
blockchain in a timely manner (since "deprecated" channel states become
valid again after some agreed-upon block-time).

I think pretty much everyone agrees that the 1 MB block cap will
eventually be a problem. While people may disagree about when that will
be and how it'll play out, I think we're all in agreement that
discussing it is a good idea, especially when it comes to resolving
blocking concerns. Starting a discussion on how a hypothetical
blocksize increase would occur, and on the necessary
blocking/want-to-have features and tradeoffs, seems a great way to
approach this problem.

The needs of the Lightning Network may be best served by being able to
prioritize a large mass of timeout transactions at once (when a
well-connected node stops communicating).

--
Joseph Poon
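[As a sketch of what such a coinbase-vote rule might look like - the
window size, majority threshold, and vote encoding below are all
assumptions for illustration, not details from the post.]

    # Hypothetical consensus check: a block above the default soft cap
    # is only valid if a majority of recent coinbases voted to allow it.
    DEFAULT_SOFT_CAP = 1_000_000   # assumed 1 MB default after the fork
    VOTE_WINDOW = 2016             # assumed look-back window, in blocks

    def block_size_acceptable(block_size, hard_cap, recent_votes):
        """recent_votes: booleans parsed from the last VOTE_WINDOW
        coinbases, True meaning 'allow blocks above the soft cap'."""
        if block_size > hard_cap:
            return False                  # hard consensus limit
        if block_size <= DEFAULT_SOFT_CAP:
            return True                   # always fine under the cap
        # above the soft cap: require a majority of votes in favor
        return sum(recent_votes) * 2 > len(recent_votes)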
From: Tier Nolan <tier.nolan@gm...> - 2015-05-07 23:32:21

One of the suggestions for avoiding the problem of fees going to zero
is assurance contracts. These let users (perhaps large merchants or
exchanges) pay to support the network. If insufficient people pay into
the contract, then it fails.

Mike Hearn suggests one way of achieving it, but it doesn't actually
create an assurance contract: miners can exploit the system to convert
the pledges into donations.

https://bitcointalk.org/index.php?topic=157141.msg1821770#msg1821770

Consider a situation in the future where the minting fee has dropped to
almost zero. A merchant wants to cause block number 1 million to
effectively have a minting fee of 50 BTC. He creates a transaction with
one input (0.1 BTC) and one output (50 BTC) and signs it using
SIGHASH_ANYONECANPAY. The output pays to OP_TRUE, which means that
anyone can spend it. The miner who includes the transaction will send
it to an address he controls (or pay it to fees). The transaction has a
locktime of 1 million, so that it cannot be included before that point.

This transaction cannot yet be included in a block, since its inputs
are lower than its outputs. The SIGHASH_ANYONECANPAY flag means that
others can pledge additional funds: they add more inputs, adding more
money, with the same sighash. There would need to be some kind of
notice board system for these pledges, but if enough people pledge,
then a valid transaction can be created. It is in miners' interest to
maintain such a notice board.

The problem is that it counts as a pure donation. Even if only 10 BTC
has been pledged, a miner can just add 40 BTC of his own money and
finish the transaction. He nets the 10 BTC of pledges if he wins the
block; if he loses, nobody sees his 40 BTC transaction. The only risk
is if his block is orphaned and somehow the miner who mines the winning
block gets his 40 BTC transaction into that block.

The assurance contract was supposed to mean "if the effective minting
fee for block 1 million is 50 BTC, then I will pay 0.1 BTC". By adding
his 40 BTC to the transaction, the miner converts it into a pure
donation. The key point is that *other* miners don't get a 50 BTC
reward if they find the block, so it doesn't push up the total hashing
power committed to the blockchain the way a 50 BTC minting fee would.
That is the whole point of the assurance contract.

OP_CHECKLOCKTIMEVERIFY could be used to solve the problem. Instead of
paying to OP_TRUE, the transaction should pay 50 BTC to "<1 million>
OP_CHECKLOCKTIMEVERIFY OP_TRUE" and 0.01 BTC to "OP_TRUE". This means
that the transaction could be included in a block well in advance of
the 1 million block point. Once block 1 million arrives, any miner
would be able to spend the 50 BTC. The 0.01 BTC is the fee for the
block the transaction is included in. If the contract hasn't been
included in a block well in advance, pledgers would be advised to spend
their pledged inputs.

It can be used to pledge to many blocks at once: the transaction could
pay out lots of 50 BTC outputs, with the locktime increasing for each
output. For high-value transactions, it isn't just the proof-of-work of
the next block that matters but of all the blocks built on top of it. A
pledger might want to say "I will pay 1 BTC if the next 100 blocks all
have an effective minting fee of at least 50 BTC."
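[For concreteness, a sketch of those two outputs as readable script
templates. An OP_DROP is added after the CLTV check, since BIP 65
leaves the height on the stack; amounts and the height are just the
ones from the example above, and this is not a real transaction
builder.]

    # The assurance-contract transaction's outputs, written as
    # human-readable script templates rather than serialized Script.
    TARGET_HEIGHT = 1_000_000

    outputs = [
        # The 50 BTC bounty: unspendable until block 1,000,000, then
        # claimable by anyone (in practice, that block's miner).
        (50.0, f"{TARGET_HEIGHT} OP_CHECKLOCKTIMEVERIFY OP_DROP OP_TRUE"),
        # The small fee output for whichever block buries the contract
        # itself, spendable immediately by anyone.
        (0.01, "OP_TRUE"),
    ]

    for amount, script in outputs:
        print(f"{amount:>6} BTC -> {script}")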
From: Nicolas DORIER <nicolas.dorier@gm...> - 2015-05-07 23:14:29

Executive Summary: I explain the objectives we should aim for to reach
agreement without drama or controversy, and to relieve the core devs of
the central-banker role (as Jeff Garzik pointed out). Knowing the
objectives, I propose a solution based on them that could be agreed on
tomorrow, would permanently fix the block size problem without
controversy, and would be immediately applicable.

The objectives:

There is consensus on the fact that nobody wants the core developers to
be seen as central bankers. There is also consensus that more
decentralization is better than less (assuming there is no cost to it).

This means you should reject all arguments based on economic, political
and ideological principles about what Bitcoin should become. This
includes:

1) Whether Bitcoin should be a store of value or suitable for coffee
transactions,
2) Whether we need a fee market, block scarcity, and how much of them,
3) Whether we need to periodically increase the block size via some
voodoo formula which speculates on the future bandwidth and cost of
storage.

Taking decisions for such reasons is what central bankers do, and you
don't want to be bankers. It follows that decisions should be taken
only on technical and decentralization considerations (more about
decentralization below).

Scarcity will evolve without anyone taking decisions about it, for the
simple reason that storage and bandwidth are not free, and nor is a
transaction, thanks to increased propagation time. This baked-in
scarcity will evolve automatically as storage, bandwidth and encoding
evolve, without anybody taking any decision or making any speculation
about the future.

Sadly, deciding how much decentralization should be in the system by
tweaking the block size limit is also an economic decision that should
not rest with the core devs. It follows that:

4) Core devs should not decide on the suitable amount of
decentralization by tweaking the block size limit.

Still, removing the limit altogether is a no-no: what would happen if a
block of 100 GB were created? The network would immediately become
centralized, not only for miners but also for Bitcoin service
providers. Also, core devs might have technical considerations about
bitcoin core which impose a temporary limit until a bug is resolved.

The solution:

So here is a proposal that addresses all my points and, I think, would
get a reasonable consensus. It can be published tomorrow without any
controversy, would be agreed in one year, and can be safely reiterated
every year. Developers will also not have to play politician or central
banker. (Well, it sounds too good to be true; I'm waiting to be proven
wrong.)

The solution is to use block voting. In each block, a miner gives the
block size he would like to have at the next deadline (for example, 30
May 2015). The rational choice for him is just enough to clear the
memory pool - maybe a little less if he believes fee pressure is
beneficial for him, maybe a little more if he believes he should leave
some room for increased use. At the deadline, we take the median of the
votes and implement it as the new block size limit. Reiterate for the
next year.

Objectives reached:
- No central-banking decisions on devs' shoulders,
- Votes can start tomorrow,
- Implementation has only to be ready in one year (no can-kicking),
- Will increase as demand grows,
- Will increase as network capacity and storage grow,
- Bitcoin becomes what miners want, not what core devs and politicians
want,
- Implementation reasonably easy,
- Will get miner consensus, with no impact on existing Bitcoin
services.

Unknown:
- Effect on bitcoin core stability (core devs might have a valid
technical reason to impose a limit),
- Maybe a better statistical function than the median is possible.

Additional input for the debate: some people were debating whether
miners are altruistic or act rationally. We should always expect them
to act rationally, but we should not forget the peculiarity of the TCP
backoff game: while it is in the best interest of players NOT to back
off before re-emitting a TCP packet when the ACK is not received,
everybody backs off (because of the fallacy that changing a TCP
implementation is costless). Often, when we think a real-life situation
is a prisoner's dilemma, it turns out that the incentives were just
incorrectly modeled.

Core devs, thanks for all your work, but please step out of the
banker's role and focus on where you are the best. I speak as an
entrepreneur who doesn't want decisions about Bitcoin to be taken by
whoever has the biggest gun. If the decision on the hard limit is taken
for anything other than purely technical reasons, i.e. for the
maximization of whatever metric, it will clearly put you in the
banker's shoes. As an entrepreneur, I have better things to do than
speculate about who has the biggest gun on the core team.

Please consider my solution,
Nicolas Dorier
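[A minimal sketch of the vote tally described above; the coinbase
encoding and names are hypothetical, and only the median rule comes
from the proposal itself.]

    import statistics

    def new_block_size_limit(coinbase_votes, current_limit):
        """coinbase_votes: desired block sizes, in bytes, parsed from
        each block's coinbase over the period ending at the deadline."""
        if not coinbase_votes:
            return current_limit    # no votes cast: keep the old limit
        return int(statistics.median(coinbase_votes))

    # Miners voting 1 MB, 2 MB and 8 MB yield a 2 MB limit next year.
    print(new_block_size_limit([1_000_000, 2_000_000, 8_000_000],
                               1_000_000))  # -> 2000000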
From: 21E14 <21xe14@gm...> - 2015-05-07 23:05:37

I am more fazed by PR 5288 and PR 5925 not getting merged in than by
this thread. So, casting my ballot in favor of the block size increase.
Clearly, we're still rehearsing proper discourse, and that ain't gonna
get fixed here and now.
From: Roy Badami <roy@gn...> - 2015-05-07 22:09:00

On Thu, May 07, 2015 at 11:49:28PM +0200, Pieter Wuille wrote:
> I would not modify my node if the change introduced a perpetual 100
> BTC subsidy per block, even if 99% of miners went along with it.

Surely, in that scenario Bitcoin is dead. If the fork you prefer has
only 1% of the hash power it is trivially vulnerable not just to a 51%
attack but to a 501% attack, not to mention the fact that you'd only be
getting one block every 16 hours.

> A hardfork is safe when 100% of (economically relevant) users upgrade.
> If miners don't upgrade at that point, they just lose money.
>
> This is why a hashrate-triggered hardfork does not make sense. Either
> you believe everyone will upgrade anyway, and the hashrate doesn't
> matter. Or you are not certain, and the fork is risky, independent of
> what hashrate upgrades.

Beliefs are all very well, but they can be wrong. Of course we should
not go ahead with a fork that we believe to be dangerous, but requiring
a supermajority of miners is surely a wise precaution. I fail to see
any realistic scenario where 99% of miners vote for the hard fork to go
ahead, and the economic majority votes to stay on the blockchain whose
hashrate has just dropped two orders of magnitude - so low that the
mean time between blocks is now over 16 hours.

> And the March 2013 fork showed that miners upgrade on a different
> schedule than the rest of the network.
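[The 16-hour figure follows directly from the retarget rule: until
difficulty adjusts, the expected block interval scales inversely with
the surviving hashrate. A quick check:]

    # Expected block interval on a fork, before difficulty adjusts.
    def expected_interval_hours(hashrate_fraction, base_interval_s=600):
        return base_interval_s / hashrate_fraction / 3600

    print(expected_interval_hours(0.01))  # 1% of hashrate -> ~16.7 h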
From: Matt Corallo <bitcoin-list@bl...> - 2015-05-07 22:02:18

OK, so let's do that. I've seen a lot of "I'm not entirely comfortable
with committing to this right now, but think we should eventually", but
not much "I'd be comfortable with committing to this when I see X". In
the interest of ignoring debate and pushing people towards a consensus
at all costs ( ;) ), I'm gonna go ahead and suggest we talk about the
second.

Personally, there are several things that worry me significantly about
committing to a blocksize increase, which I'd like to see resolved
before I'd consider supporting a blocksize increase commitment.

* Though there are many proposals floating around which could
significantly decrease block propagation latency, none of them are
implemented today. I'd expect to see these not only implemented but
being used in production (though I don't particularly care about them
being all that stable). I'd want to see measurements of how they
perform both in production and in the face of high packet loss (eg
across the GFW or in the case of small/moderate DoS). In addition, I'd
expect to see analysis of how these systems perform in the worst case,
not just packet-loss-wise, but in the face of miners attempting to
break the system.

* I'd very much like to see someone working on better scaling
technology, both in terms of development and in terms of getting
traction in the marketplace. I know StrawPay is working on development,
though it's not obvious to me from their website how far along they
are, but I don't know of any commitments by large players (either SPV
wallets, centralized wallet services, payment processors, or any
others) to support such a system (to be fair, it's probably too early
for such players to commit to anything, since nothing exists in
public).

* I'd like to see some better conclusions to the discussion around
long-term incentives within the system. If we're just building Bitcoin
to work in five years, great, but if we want it all to keep working as
subsidy drops significantly, I'd like a better answer than "we'll deal
with it when we get there" or "it will happen, all the predictions
based on people's behavior today say so" (which are hopefully invalid
thanks to the previous point). Ideally, I'd love to see some real fee
pressure on the network already starting to develop when we commit to
hardforking in a year. Not just full blocks with some fees because
wallets are including far greater fees than they really need to, but
software which properly handles fees across the ecosystem, smart fee
increases when transactions aren't confirming (eg replace-by-fee, which
could be limited to increase-in-fees-only for those worried about
double-spends).

I probably forgot one or two and certainly don't want to back myself
into a corner by committing to something here, but those are a few
things I see today as big blockers on larger blocks. Luckily, people
have been making progress on building the software needed for all of
the above for a while now, but I think it's all very, very immature
today.

On 05/07/15 19:13, Jeff Garzik wrote:
> On Thu, May 7, 2015 at 3:03 PM, Matt Corallo <bitcoin-list@...> wrote:
> -snip-
> > If, instead, there had been an intro on the list as "I think we
> > should do the blocksize increase soon, what do people think?", the
> > response could likely have focused much more around creating a
> > specific list of things we should do before we (the technical
> > community) think we are prepared for a blocksize increase.
>
> Agreed, but that is water under the bridge at this point. You -
> rightly - opened the topic here and now we're discussing it.
>
> Mike and Gavin are due the benefit of the doubt because making a
> change to a leaderless automaton powered by leaderless open source
> software is breaking new ground. I don't focus so much on how we got
> to this point, but rather, where we go from here.
From: Pieter Wuille <pieter.wuille@gm...> - 2015-05-07 21:49:36

I would not modify my node if the change introduced a perpetual 100 BTC
subsidy per block, even if 99% of miners went along with it.

A hardfork is safe when 100% of (economically relevant) users upgrade.
If miners don't upgrade at that point, they just lose money.

This is why a hashrate-triggered hardfork does not make sense. Either
you believe everyone will upgrade anyway, and the hashrate doesn't
matter. Or you are not certain, and the fork is risky, independent of
what hashrate upgrades.

And the March 2013 fork showed that miners upgrade on a different
schedule than the rest of the network.

On May 7, 2015 5:44 PM, "Roy Badami" <roy@...> wrote:
> > On the other hand, if 99.99% of the miners updated and only 75% of
> > merchants and 75% of users updated, then that would be a serious
> > split of the network.
>
> But is that a plausible scenario? Certainly *if* the consensus rules
> required a 99% supermajority of miners for the hard fork to go ahead,
> then there would be absolutely no rational reason for merchants and
> users to refuse to upgrade, even if they don't support the changes
> introduced by the hard fork. Their only choice, if the fork succeeds,
> is between the active chain and the one that is effectively stalled -
> and, of course, they can make that choice ahead of time.
>
> roy
From: Roy Badami <roy@gn...> - 2015-05-07 21:42:13

> On the other hand, if 99.99% of the miners updated and only 75% of
> merchants and 75% of users updated, then that would be a serious split
> of the network.

But is that a plausible scenario? Certainly *if* the consensus rules
required a 99% supermajority of miners for the hard fork to go ahead,
then there would be absolutely no rational reason for merchants and
users to refuse to upgrade, even if they don't support the changes
introduced by the hard fork. Their only choice, if the fork succeeds,
is between the active chain and the one that is effectively stalled -
and, of course, they can make that choice ahead of time.

roy
From: Matt Corallo <bitcoin-list@bl...> - 2015-05-07 21:29:11

On 05/07/15 19:34, Mike Hearn wrote:
> > The appropriate method of doing any fork, that we seem to have been
> > following for a long time, is to get consensus here and on IRC and
> > on github and *then* go pitch to the general public
>
> So your concern is just about the ordering and process of things, and
> not about the change itself?

No, I'm very concerned about both.

> I have witnessed many arguments in IRC about block sizes over the
> years. There was another one just a few weeks ago. Pieter left the
> channel for his own sanity. IRC is not a good medium for arriving at
> decisions on things - many people can't afford to sit on IRC all day
> and conversations can be hard to follow. Additionally, they tend to go
> circular.

I agree; that's why this mailing list was created in the first place
(well, also because bitcointalk is too full of spam, but close enough
:)).

> That said, I don't know if you can draw a line between the "ins" and
> "outs" like that. The general public is watching, commenting and
> deciding no matter what. Might as well deal with that and debate in a
> format more accessible to all.

It's true, just like it's true that the general public can opt to run
any version of software they want. That said, the greater software
development community has to update /all/ the software across the
entire ecosystem, and thus provides what amounts to a strong
recommendation of which course to take. Additionally, though there are
issues (eg if there were a push to remove the total coin limit) which
are purely political, and thus which should be up to the greater public
to decide, the blocksize increase is not one of them. It is intricately
tied to Bitcoin's delicate incentive structure, which many of the
development community are far more familiar with than the general
Bitcoin public. If there were a listserv comprised primarily of the
people on #bitcoin-wizards, I might have suggested a discussion there
first, but there isn't (as far as I know?).

> > If, instead, there had been an intro on the list as "I think we
> > should do the blocksize increase soon, what do people think?"
>
> There have been many such discussions over time. On bitcointalk. On
> reddit. On IRC. At developer conferences. Gavin already knew what many
> of the objections would be, which is why he started answering them.
>
> But alright. Let's say he should have started a thread. Thanks for
> starting it for him.
>
> Now, can we get this specific list of things we should do before we're
> prepared?

Yes... I'm gonna split the topic since this is already far off course
for that :).

> > A specific credible alternative to what? Committing to blocksize
> > increases tomorrow? Yes, doing more research into this and
> > developing software around supporting larger block sizes so people
> > feel comfortable doing it in six months.
>
> Do you have a specific research suggestion? Gavin has run simulations
> across the internet with modified full nodes that use 20mb blocks,
> using real data from the block chain. They seem to suggest it works
> OK.
>
> What software do you have in mind?

Let me answer that in a new thread :).
From: Tier Nolan <tier.nolan@gm...> - 2015-05-07 21:24:53

In terms of miners, a strong supermajority is arguably sufficient; even
75% would be enough. The near-total consensus required is among
merchants and users.

If (almost) all merchants and users updated and only 75% of the miners
updated, then that would give a successful hard fork. On the other
hand, if 99.99% of the miners updated and only 75% of merchants and 75%
of users updated, then that would be a serious split of the network.

The advantage of strong miner support is that it effectively kills the
fork that follows the old rules. The 25% of merchants and users see a
blockchain stall. Miners are likely to switch to the fork that is worth
the most.

A mining pool could even offer two different sub-domains, so a hasher
could pick which rule-set to follow. Most likely, they would converge
on the fork which paid the most, but the old rule-set would likely
still have some hashing power and would eventually re-target.
From: Roy Badami <roy@gn...> - 2015-05-07 20:30:39

I'd love to have more discussion of exactly how a hard fork should be
implemented. I think it might actually be of some value to have rough
consensus on that before we get too bogged down with exactly what the
proposed hard fork should do. After all, how can we debate whether a
particular hard fork proposal has consensus if we haven't even decided
what level of supermajority is needed to establish consensus?

For instance, back in 2012 Gavin was proposing, effectively, that a
hard fork should require a supermajority of 99% of miners in order to
succeed:

https://gist.github.com/gavinandresen/2355445

More recently, Gavin has proposed that a supermajority of only 80% of
miners should be needed in order to trigger the hard fork:

http://www.gavintech.blogspot.co.uk/2015/01/twenty-megabytes-testing-results.html

Just now, on this list (see attached message), Gavin seems to be
alluding to some mechanism for a hard fork which involves consensus of
full nodes, and then a soft fork preceding the hard fork, which I'd
love to see a full explanation of.

FWIW, I think 80% is far too low to establish consensus for a hard
fork. I think the supermajority of miners should be sufficiently large
that the rump doesn't constitute a viable coin. If you don't have that
very strong level of consensus then you risk forking Bitcoin into two
competing coins (and I believe we already have one exchange promising
to trade both forks as long as both blockchains are alive).

As a starting point, I think 35/36ths of miners (approximately 97.2%)
is the minimum I would be comfortable with. It means that the rump coin
will initially have an average confirmation time of 6 hours (until
difficulty, very slowly, adjusts), which is probably far enough from
viable that the majority of holdouts will quickly desert it too.

Thoughts?

roy
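[The 35/36 numbers check out as simple arithmetic:]

    # A rump chain keeping 1/36 of the hashrate sees blocks 36x slower.
    rump = 1 / 36
    print(f"supermajority: {1 - rump:.1%}")           # 97.2%
    print(f"rump interval: {600 * 36 / 3600} hours")  # 6.0 hours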
From: Jérémie Dubois-Lacoste <jeremie.dl@gm...> - 2015-05-07 20:20:10
|
> I have written up an explanation of what I think will happen if we run out
> of capacity:
>
> https://medium.com/@octskyward/crash-landing-f5cc19908e32

Looks like a solid description of what would happen. I fail to see, though, how this description wouldn't apply equally to a 20MB network at some point in the future, say ~3 years from now, if Bitcoin keeps taking off. If you agree that it will be harder in the future to change the block limit again, and we switch to a hardcoded 20MB, then aren't we just trading an immediate relief for a larger blockage later?

> Now I'm going to go eat some dinner :)
|
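Jérémie's point can be made quantitative under an assumed growth rate (purely illustrative; no growth figures appear in the thread): with steady exponential growth of r per year, a 20x capacity increase lasts log(20)/log(1+r) years before the same pressure returns.

```python
# How long does a 20x capacity increase last under steady exponential
# growth in transaction volume? The growth rates are assumptions for
# illustration only; no growth figures appear in the thread.
import math

HEADROOM = 20.0  # 1 MB -> 20 MB

for annual_growth in (0.5, 1.0, 2.0):  # +50%, +100%, +200% per year
    years = math.log(HEADROOM) / math.log(1.0 + annual_growth)
    print(f"{annual_growth:+.0%}/yr growth: 20x headroom lasts {years:.1f} years")
```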
From: Justus Ranvier <justusranvier@ri...> - 2015-05-07 19:59:32
|
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 05/07/2015 09:54 PM, Jeff Garzik wrote: > By the time we get to fee pressure, in your scenario, our network > node count is tiny and highly centralized. Again, this assertion requires proof. Simply saying things is not the same as them being true. -----BEGIN PGP SIGNATURE----- iQIcBAEBAgAGBQJVS8QWAAoJECpf2nDq2eYj+/4P/2JXxo2RDAg0ptd9aUYVvzp9 KhL33cdmK8kbKBFOVcOuIrlQRzZn9iydIPC165Y40Y6Wrtgw2PoXctuqdQdXaSZI M3bHuM7mweHyb3xBHNaNHIxfwrMjQQAdOTGO7PZusghDYz2QEj44dhIcNOzO7uTD fXkhzgJfwu0l0Wqn3v/R9amRUWLE5nlM566xJ2sVtlfBMEyzR5L1GwX1lKNhxeO8 qvkgegsF2Usjz9pIUMSGFxSWZQuTSjHbhbh28JaT/wi6DI3pcTV0FPw95IPImqUh rbIqcPh43omXrHKEHV/FB+XMItD3VvR9dxogYaFZLv1EU2gnF2IM0cw5a/oyHr+L C920uEbXrvrMEJw1ftvxQyu6NY5c3/5iVMqz773oQSjOihkZ8P1JvxQnldU6mcoU RaKM13cxgjSkCqJ5R1iIldFQPCLLWUKJDkPEnGlwdLPF/vwhnCt1PZJTB5hqoCgC ab5yBVLpLgo7sbizOeX/R3WGp3NjGXDQC93Af/Vr37uiu1ZT+1P1Ow86hsZTRx6b 4d25tSGg7Tw3Bs/YOhJ9AKtlN092Y8/WBMscQu6MaFt6I/1OMX9OVH+veEj/VjwB L/dxWTRdC0HEKiYv+EuESIRoyTLlCHKBUDBgKbYSMjetg6WW64hYrpxNX7TH20o6 00bWPVV2PcEWuCc230UF =1bK6 -----END PGP SIGNATURE----- |
From: Btc Drak <btcdrak@gm...> - 2015-05-07 19:57:49
|
On Thu, May 7, 2015 at 5:11 PM, Mike Hearn <mike@...> wrote:
> Right now there is this nice warm fuzzy notion that decisions in Bitcoin
> Core are made by consensus. "Controversial" changes are avoided. I am
> trying to show you that this is just marketing.

Consensus is arrived at when the people who are most active at the time (active in contributing to discussions, code review, giving opinions etc.) agree to ACK. There is a regular staple of active contributors. Bitcoin development is clearly a meritocracy: the more people participate and contribute, the more weight their opinions hold.

> Nobody can define what these terms even mean. It would be more accurate to
> say decisions are vetoed by whoever shows up and complains enough,
> regardless of technical merit. After all, my own getutxo change was merged
> after a lot of technical debate (and trolling) ..... then unmerged a day
> later because "it's a shitstorm".

I am not sure that is fair; your PR was reverted because someone found a huge exploit in it, enough to invalidate all the arguments used to get it merged in the first place.

> So if Gavin showed up and complained a lot about side chains or whatever,
> what you're saying is, oh that's different. We'd ignore him. But when
> someone else complains about a change they don't like, that's OK.
>
> Heck, I could easily come up with a dozen reasons to object to almost any
> change, if I felt like it. Would I then be considered not a part of the
> consensus because that'd be convenient?

I don't think it's as simple as that. Objections for the sake of objecting, or unsound technical objections, are going to be seen for what they are. This is a project with some of the brightest people in the world in this field. Sure, people can be disruptive, but their reputations stand the test of time. The consensus process might not be perfect, but it almost feels like you want to declare a state of emergency and suspend the normal review process for this proposed hard fork. |
From: Bernard Rihn <bernie@ha...> - 2015-05-07 19:54:43
|
It seems to me like some (maybe most) of the pressure is actually external, from companies that might release something that dramatically increases "adoption" & transaction rates (and the data on the historic rate of adoption & slumps seems somewhat disconnected from their interest in a quick roll-out)?

It seems like the question actually becomes: what is our maximum acceptable cost (hardware capex & bandwidth & power opex) associated with running a full node, both without hardware acceleration and with it (something which presumably "doesn't exist" yet)? Are we making the assumption that hardware acceleration for confirmation will become broadly available and that the primary limiter will become anonymous bandwidth?

Excuse my ignorance, but I imagine somebody must have already looked at confirmation times vs. block size on various existing hardware platforms (at least 3 or 4? a MinnowBoard, an old laptop, and a modern desktop, say)? Is there an easy way to set up bitcoind or some other script to test this? (happy to help)

Re Moore's law: yeah, some say stuff like 5nm may never happen. We're already using EUV with plasma emitters, immersed reflective optics, and double patterning... and in storage land, switching to helium. Things may slow A LOT over the next couple of decades, and I'd guess that a quadratic increase (in both storage & compute) probably isn't a safe assumption.

On Thu, May 7, 2015 at 11:46 AM, Btc Drak <btcdrak@...> wrote:
> On Thu, May 7, 2015 at 7:40 PM, Gavin Costin <slashdevnull@...> wrote:
>> Can anyone opposed to this proposal articulate in plain English the worst
>> case scenario(s) if it goes ahead?
>>
>> Some people in the conversation appear to be uncomfortable, perturbed,
>> defensive etc about the proposal .... But I am not seeing specifics on why
>> it is not a feasible plan.
>
> See this response:
> http://www.mail-archive.com/bitcoin-development@.../msg07462.html
|
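One low-effort way to collect the cross-hardware numbers Bernard asks for would be to time Bitcoin Core's existing `verifychain` RPC on each machine. A rough sketch, assuming a synced, running bitcoind with `bitcoin-cli` on the PATH; the check level and block count are arbitrary choices for the test:

```python
# Rough cross-hardware benchmark: time how long a running bitcoind takes
# to re-verify recent blocks via the `verifychain` RPC. Assumes a synced
# node with RPC enabled and bitcoin-cli on the PATH; results are only
# comparable across machines checked at the same chain height.
import subprocess
import time

CHECK_LEVEL = 4   # most thorough check level
NUM_BLOCKS = 288  # roughly two days of blocks

start = time.monotonic()
subprocess.run(
    ["bitcoin-cli", "verifychain", str(CHECK_LEVEL), str(NUM_BLOCKS)],
    check=True,
)
elapsed = time.monotonic() - start
print(f"level-{CHECK_LEVEL} verification of {NUM_BLOCKS} blocks: "
      f"{elapsed:.1f}s total, {elapsed / NUM_BLOCKS:.2f}s/block")
```

Run against the same chain height on each candidate platform, the per-block figure gives a crude proxy for how verification cost scales with hardware.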
From: Jeff Garzik <jgarzik@bi...> - 2015-05-07 19:54:41
|
On Thu, May 7, 2015 at 3:31 PM, Alan Reiner <etotheipi@...> wrote:
> (1) Blocks are essentially nearing "full" now. And by "full" he means
> that the reliability of the network (from the average user perspective) is
> about to be impacted in a very negative way

Er, to be economically precise, "full" just means fees are no longer zero. Bitcoin behaves as it always has. It is no longer basically free to dump spam into the blockchain, as it is today. In the short term, blocks are bursty, with some arriving at 1-minute intervals, some at 60-minute intervals. This does not change with larger blocks.

> (2) Leveraging fee pressure at 1MB to solve the problem is actually really
> a bad idea. It's really bad while Bitcoin is still growing, and relying on
> fee pressure at 1 MB severely impacts the attractiveness and adoption
> potential of Bitcoin (due to high fees and unreliability). But more
> importantly, it ignores the fact that 7 tps is pathetic for a global
> transaction system. It is a couple of orders of magnitude too low for any
> meaningful commercial activity to occur. If we continue with a cap of
> 7 tps forever, Bitcoin *will* fail. Or at best, it will fail to be useful
> for the vast majority of the world (which probably leads to failure). We
> shouldn't be talking about fee pressure until we hit 700 tps, which is
> probably still too low.
[...]

1) Agree that 7 tps is too low.

2) Where do you want to go? Should Bitcoin scale up to handle all the world's coffees? This is hugely unrealistic. 700 tps is 100MB blocks, 14.4 GB/day -- just for a single feed. If you include relaying to multiple nodes, plus serving 500 million SPV clients en masse, who has the capacity to run such a node? By the time we get to fee pressure, in your scenario, our network node count is tiny and highly centralized.

3) In re "fee pressure" -- do you see the moral hazard to a software-run system? It is an intentional, human decision to flood the market with supply, thereby altering the economics, forcing fees to remain low in the hopes of achieving adoption. I'm pro-Bitcoin and obviously want to see Bitcoin adoption -- but I don't want to sacrifice every decentralized principle and become a central banker in order to get there.

-- 
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc. https://bitpay.com/ |
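Jeff's figures are easy to reproduce from the usual rule of thumb that 1 MB every 10 minutes carries roughly 7 tps of ~250-byte transactions; the 250-byte average is an assumption, so the sketch below lands near, rather than exactly on, his round numbers:

```python
# Sanity check of the tps/bandwidth figures: block size and daily feed
# bandwidth implied by a transaction rate, assuming ~250-byte average
# transactions (the usual basis of the 1 MB ~= 7 tps rule of thumb).
AVG_TX_BYTES = 250
BLOCK_INTERVAL_SEC = 600
BLOCKS_PER_DAY = 86400 / BLOCK_INTERVAL_SEC  # 144

def load_for(tps):
    block_mb = tps * BLOCK_INTERVAL_SEC * AVG_TX_BYTES / 1e6
    gb_per_day = block_mb * BLOCKS_PER_DAY / 1e3
    return block_mb, gb_per_day

for tps in (7, 700):
    block_mb, gb_day = load_for(tps)
    print(f"{tps} tps -> {block_mb:.0f} MB blocks, {gb_day:.1f} GB/day per feed")
```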