The Real Blockchain Scalability Challenge


Blockchain technology has been unable to serve certain markets due to challenges scaling the protocol to handle the required throughput. There are two key measurements that impact blockchain scalability: real-time performance and replay performance.

The Steem blockchain is able to process about 50,000 vote operations per second, both in real time and in replay. Each vote operation takes about 20 µs to apply. The problem is that we cannot ever allow the Steem blockchain to approach that transaction volume, or no one would be able to catch up to real time once they fell behind.

Ideally the replay time would not grow by more than 30 minutes per year; after 10 years of at-scale operation the blockchain would require 5 hours of replay time to catch up. Assuming we target Reddit scale of 500 transactions per second (real time), we need a blockchain that can apply 500 * 60 * 60 * 24 * 365 transactions in 30 * 60 seconds. In other words, we need a blockchain that can apply roughly 9 million transactions per second.

Even though we can process 50,000 votes per second in theory, in practice our blockchain has only scaled to 1,000 transactions per second in the wild. We need to increase our real-world performance by a factor of roughly 10,000 to achieve our target of 500 transactions per second (real time) while keeping replay time to a manageable 30 minutes per year of operation.

We can relax our requirements a bit, allowing 1 hour of replay time per year and targeting only 250 transactions per second. Now we only need to improve performance by a factor of roughly 2,500.
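
As a rough sanity check on those targets, here is a back-of-the-envelope sketch (assuming a 365-day year and the ~1,000 tps real-world replay rate noted above):

```python
# Back-of-the-envelope check of the replay targets above.
# Assumptions: 365-day year, ~1,000 tps observed real-world replay rate.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365
CURRENT_REPLAY_TPS = 1_000

def required_replay_tps(real_time_tps, replay_budget_seconds):
    """Replay throughput needed so one year of traffic replays within the budget."""
    return real_time_tps * SECONDS_PER_YEAR / replay_budget_seconds

for real_time_tps, budget_minutes in [(500, 30), (250, 60)]:
    needed = required_replay_tps(real_time_tps, budget_minutes * 60)
    speedup = needed / CURRENT_REPLAY_TPS
    print(f"{real_time_tps} tps, {budget_minutes} min/year replay budget: "
          f"~{needed / 1e6:.1f}M tps replay capacity (~{speedup:,.0f}x today)")

# 500 tps with a 30 min/year budget -> ~8.8M tps (~8,760x), rounded to ~10,000x above
# 250 tps with a 60 min/year budget -> ~2.2M tps (~2,190x), rounded to ~2,500x above
```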

Why Replay Performance Matters

It is only by replaying the blockchain that anyone can validate the current state. Without the ability to replay the blockchain in a timely manner, it becomes increasingly difficult to deploy new nodes without trusting the derived state of past evaluations. Any time the derived state is corrupted, our database schema changes, or new capacity needs to be brought online, blockchain administrators need to rebuild the state from the blockchain. When this happens it could result in days or weeks of service downtime while they catch up.

Just ask anyone tasked with deploying a Bitcoin service who has had to replay the Bitcoin blockchain. It can take over 12 hours on good hardware, and Bitcoin only processes about 5 transactions per second in real time, or an average of 2 transactions per second over the last 8 years.

Steemit has a Solution

I have long said that single-threaded performance is the bottleneck of blockchain scalability. This is based on the premise that every transaction has an impact on the global state and could potentially affect the validity of subsequent transactions. I have even indicated that synchronization overhead would negate any benefits of attempting to parallelize block evaluation.
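
To see why in a toy example (this is not Steem's code; the accounts and balances are made up): each operation reads and writes shared state, so its validity can depend on every operation applied before it, and applying operations in parallel would require synchronizing on that shared state anyway.

```python
# Toy illustration (not Steem's code): why state-dependent transactions resist
# naive parallelism. Each transfer reads and writes shared balances, so its
# validity depends on every transfer applied before it.

balances = {"alice": 10, "bob": 0, "carol": 0}

def apply_transfer(state, sender, receiver, amount):
    # Validity check depends on the *current* derived state.
    if state[sender] < amount:
        raise ValueError(f"{sender} cannot send {amount}")
    state[sender] -= amount
    state[receiver] += amount

transactions = [
    ("alice", "bob", 10),   # valid on its own
    ("bob", "carol", 10),   # only valid *after* the transfer above is applied
]

# Sequential replay works. Applying these two concurrently (or reordered) would
# require synchronizing on alice's and bob's balances anyway, which is the
# synchronization overhead mentioned above.
for sender, receiver, amount in transactions:
    apply_transfer(balances, sender, receiver, amount)

print(balances)  # {'alice': 0, 'bob': 0, 'carol': 10}
```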

We are determined to not let anything get in the way of scaling Steem and Steemit to an unlimited degree. This week our team has come up with a design and roadmap for scaling Steem to the speeds required to handle as many users and transactions as you all can throw at us. We are in the process of documenting our design and producing a roadmap for its development and deployment.

I can honestly say I have never been more excited about the potential for decentralized blockchains scaling to handle operations the size of Reddit and Facebook.


Dan, you and your team's ability and drive to continue to innovate and adapt is just so inspiring. Your innovations and the progress of your projects are really starting to mount up in a way that clearly sets them apart within the crypto world. Graphene and Steem/Steemit are bringing blockchains to non-crypto people everywhere. Could any other project say the same? This is a very very exciting announcement! Thanks

Hmmm... this is a surprise to hear actually. I applaud your description of the scalability issues; the surprise is in the analysis. Although I realize this is the first announcement for a new design concept and you are excited about it, there are several concerns I have based on reading this.

First, how does your analysis factor hardware performance increases over the next 10 years? Last year Intel announced a dramatic new memory fabrication technology (3D XPoint™) that will have a huge impact on hardware performance and architecture when it becomes available, and I suspect we'll be seeing this surface in the next year or 2.

You certainly don't have to convince me how important replay time is, as that was a killer in BitShares 0.x.x. However, I'm surprised by the numbers here. I thought you had plenty of room to scale for the future built in based on early testing of graphene on BitShares 2.0 last year. I recognize Steemit is different, and you specifically mention voting in your analysis. Is that what renders any comparison between BitShares 2.0 & Steem unreasonable? Are there other differences between Steem & BitShares besides voting that make scaling Steem so much more difficult than BitShares?

Second, I'm concerned about how this announcement will impact Steemit. I'd hate to see a repeat of the issues that surfaced in the BitShares community when Graphene was announced last year, and I also worry about how implementing your new design will affect existing Steemit apps and efforts, which are far more numerous in this community.

@dan, you are an amazing tech wizard and I'm sure your analysis is sound, and I look forward to an updated roadmap and additional details. Despite that I feel uneasy as this announcement seems so similar to last June.

This applied to BitShares as well. Steem currently takes 40 minutes to reindex after 9 months of operation at less than 1% of target usage.

Don't worry, we have a much smoother migration plan than BTS 0.x to 2.0.

This whole announcement is the biggest thing since ever and thank you for your input @full-steem-ahead!

That sounds almost too good to be true! (although I have faith you guys are going to do it)

But what will the sacrifice be to this new design? There are always trade-offs. Hopefully security is fully maintained, preserved, and properly prioritized. Trust in the system is paramount.

It would be as secure (and decentralized) as the current Steem blockchain.

This is a great focus. Really excited to hear this! Any good articles I can read about such an implementation? Or at least about part of the inspiration?

We are borrowing design patterns from CPU and GPU architecture.

That last sentence...

If Dan is excited... I'm excited! :)

This type of announcement is a "must have" around Dec. 6.
WP, sir!

How could this blockchain win in today's competition?
Scalability, private tokens, zero transaction fees, super fast transactions. I've studied blockchain projects, and I didn't find one that is good enough once I considered what possibilities lie in Steem.

(I also forgot rapid development. I once thought other chains were the best, and Steem's future would be the same if it could not evolve extremely fast.)

Simply said: WOW! This is such exciting news, especially considering the upcoming second tidal wave of users... Thanks a bunch for the information, namaste :)

Interesting. I look forward to seeing how you have solved this problem - assuming I can understand it:)

This is part of why I want to try to build one of my projects on blockchain.

We are in the process of documenting our design and producing a roadmap for its development and deployment.

2017 is going to be the year of Steem; I'm more excited for the platform than ever :) Thanks for the update Dan!

"This week our team has come up with a design and roadmap for scaling Steem to the speeds required to handle as many users and transactions as your all can throw at us. "

Wow! Amazing. Can't wait to hear more about it. Congrats on the new development!

Awesome. An announcement like this may definitely help keep people from dumping once the fork shortens the power down time. I'm excited to see where Steem will go!

Interesting information!

Looking forward to seeing the team's designs!

Hmmmm interesting times ahead indeed :)

I can honestly say I have never been more excited about the potential for decentralized blockchains scaling to handle operations the size of Reddit and Facebook.

Thanks a lot for all the hard work @dantheman, and as @jrcornel so rightly says:

If Dan is excited... I'm excited! :)

I cannot believe that I can absorb this seemingly technical article and know why dan is excited. Resteemed.
I would be interested in translating this post into Chinese; if dan sees this message and agrees, then I will do it. Otherwise, I will just be a happy audience. :)

Wow! This is huge! No pun intended.

Cool, thumbs up. Good luck marketing well with @ned etc so you indeed take advantage of all that extra future scaling ability! ;-)

Really looking forward to learning more about the roadmap and future design. 2017 is going to be exciting as hell. Steem on!

Not gonna lie, I don't understand all this stuff but it is fun reading about it and posting stuff here.

My meme making game has been upped since I joined here.

Does that count for anything ?? lol

#MinnowLife

STEEM on brotha!

This will boost the popularity of Steemit and attract more users.

This is indeed intriguing and exciting... As an inventor, I absolutely love to hear about new approaches to solving old problems. Can't wait to read about how you'll accomplish this! :)

That is great news @Dan and Steemit team! It is really good to see that the team is constantly looking for improvements to make Steemit and Steem future-proof and scalable.

I think part of the way to solve this lies in some kind of notification subscription system built with a light caching node that keeps track of what data in the older parts of the tree are being transacted with, perhaps even just holding a live log that keeps track of which items are potentially going to be written to.

The light caching nodes are interface nodes that can run on smaller storage volumes and keep locally synchronised the data relevant to the users going through them, perhaps built with anycast addresses so that clients prefer short routes to a closer node, breaking the edge of the network into lower-latency hops that usually hold all the relevant data the users are interacting with.

By adding these two features the network knows which branches to keep available and can more quickly propagate new data to the users interested in it.

Blockchains are linear, but they consist of many branched parts, with multiple overlapping maps across user accounts, and there is associativity between smart contracts as well.

I probably am not really adding much to the discussion though. This is just applying distributed availability optimisation using a subscription model based on interface library event loops. By reducing the propagation of data that is not in demand, it frees time to process more data in parallel.
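
A minimal sketch of the subscription idea described in this comment, using entirely hypothetical names (`SubscriptionHub`, `edge-node-1`; this is not an existing Steem API): edge caching nodes register interest in the keys their users touch, and updates are only pushed for subscribed keys, so cold branches need not be propagated eagerly.

```python
# Minimal sketch of the subscription idea above. Hypothetical names only;
# not an existing Steem API. Edge caching nodes declare interest in keys,
# and updates are only pushed for subscribed keys.

from collections import defaultdict

class SubscriptionHub:
    def __init__(self):
        self.subscribers = defaultdict(set)  # key -> set of subscribed node ids

    def subscribe(self, node_id, key):
        self.subscribers[key].add(node_id)

    def publish(self, key, value, push):
        # Only nodes that declared interest receive the update; cold branches
        # are never propagated eagerly.
        for node_id in self.subscribers.get(key, ()):
            push(node_id, key, value)

hub = SubscriptionHub()
hub.subscribe("edge-node-1", "post/abc123")
hub.publish("post/abc123", {"votes": 42},
            push=lambda node_id, key, value: print(node_id, key, value))
```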

Dear CTO,

Resteemed and Upvoted. Excellent post! Thanks!


0.1 Steem was sent to CTO.

I look forward to seeing your solution, and I have no doubt that scaling to the volume needed by Steem in the future is possible. Bitcoiners arguing about 5 tps are embarrassing themselves while forward-looking projects are working on megascaling: Ethereum, Synereo, Maidsafe, Tezos. Do you follow Dan Hughes' experiments for emunie? There seems to be little synchronization overhead in his design.

That thread is so focused on network propagation that they have completely lost sight of computational limits.

They described a system of 1,000 chains processing 1,000 tps. That is all fine and dandy, but how fast does one chain grow, and what is its replay time, assuming a local copy is already downloaded?

What are your thoughts on other consensus mechanisms like a tangle that is more general (in terms of transactions) than a blockchain?

Tangle adds computational load and doesn't help much.

Computational load in what sense?

If the transactions contain the state, then a tangle can be processed in parallel efficiently, but if the transactions imply the state, then you must process all transactions to build the state.

Bitcoin transactions "contain the state", Steem transactions "imply the state".

Anything that has complex state cannot be easily represented in a tangle while keeping performance.
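
Loosely, the distinction is between a record that carries its own inputs and outputs and an operation whose effect only exists relative to accumulated state. A sketch with illustrative data shapes (not the real Bitcoin or Steem structures):

```python
# Illustrative data shapes only; not the real Bitcoin or Steem structures.

# "Contains the state": a UTXO-style transaction names the exact outputs it
# spends and the exact outputs it creates, so once its inputs are located it
# can largely be validated on its own.
utxo_style_tx = {
    "inputs":  [{"txid": "prev-tx", "index": 0, "amount": 5}],
    "outputs": [{"address": "bob", "amount": 5}],
}

# "Implies the state": an operation like a vote only describes an action. Its
# effect (voting power, rewards, payouts) depends on all accumulated state, so
# deriving that state means applying every prior operation in order.
implied_state_op = {
    "type": "vote",
    "voter": "alice",
    "author": "bob",
    "permlink": "some-post",
    "weight": 10000,
}
```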

The way I understand the tangle, each transaction is a unique vertex in a directed acyclic graph. Instead of a block that contains multiple transactions with a corresponding hash of those transactions, you have a path to a source that marks the initial creation of the token, i.e. from an ICO or from mining (if that is done with the tangle you're using).

A tip (or equivalently a sink in the DAG) is confirmed based on the consensus model (some type of max sum that aggregates all the weight-based paths).

I would think that you can process this in parallel efficiently: start at the tips and trace the paths backwards to the source, stopping at each vertex that has multiple transaction outputs and waiting for each path to merge back to it.

Granted, I won't claim to know anything about tangles other than fascination and a desire to learn and understand alternative consensus models.

But, this is my vague high-level overview of what's happening.
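
As a small sketch of that parallel idea, assuming the tangle is just a DAG of transactions: any vertex whose dependencies have all been processed can be handled concurrently, level by level (the names, edges, and traversal direction here are purely illustrative).

```python
# Sketch of level-by-level parallel processing of a transaction DAG.
# Purely illustrative names and edges; not any specific tangle implementation.

from collections import defaultdict, deque

def parallel_levels(edges):
    """edges: (dependency, dependent) pairs. Returns batches of vertices that
    could be processed in parallel, in dependency order (Kahn's algorithm)."""
    children = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set()
    for dep, node in edges:
        children[dep].append(node)
        indegree[node] += 1
        nodes.update((dep, node))

    ready = deque(n for n in nodes if indegree[n] == 0)  # the source(s)
    levels = []
    while ready:
        levels.append(list(ready))
        nxt = deque()
        for node in ready:
            for child in children[node]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    nxt.append(child)
        ready = nxt
    return levels

# tx1 and tx2 both build on genesis; tx3 builds on both,
# so tx1 and tx2 could be validated in parallel.
print(parallel_levels([("genesis", "tx1"), ("genesis", "tx2"),
                       ("tx1", "tx3"), ("tx2", "tx3")]))
# [['genesis'], ['tx1', 'tx2'], ['tx3']]
```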

Forward thinking. Advanced planning. Taking hypotheticals and putting them into action. Breaking glass ceilings. What I read here is a clear potential for growth and sustainability which is very exciting. We're gonna kick FB's ass! Yeah, baby!

This post has been linked to from another place on Steem.

Learn more about and upvote to support linkback bot v0.5. Flag this comment if you don't want the bot to continue posting linkbacks for your posts.

Built by @ontofractal

Thoughtful article - makes me think about challenges (hurts my head)