Well, yes, the growth in 2017 was amazing, and it will be even greater in 2018.
And yes, I can see a point in time where many random witnesses with home-grown infrastructure will no longer be able to keep up with the growth. And that's OK, because we are aiming at something bigger than just a proof of concept that a bunch of geeks can run in their garage.
Scalability is and will be a challenge and a constant battle. The key is to keep an eye on your enemy, never underestimate it, and plan ahead of time to avoid being ambushed.
If we can see a problem on the horizon, that's great, because then we have time to prepare ourselves and react accordingly.
I took part in many discussions about scalability last week, and I'm sure we can handle what is coming for the next few months.
And then?
By that time we will be ready for things that we are not ready for now.
And so on, and so on...
This might be a silly newbie question, but why would one need to store the entire blockchain, apart from those hosting a big DApp like steemit that needs fast searches?
Can't there be some kind of work-sharing (with some redundancy, of course), where you store a chunk of the chain in a deterministically computable way, so that users know whom to ask for specific information?
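Something along these lines could work in principle. A minimal sketch of what "deterministically computable" might mean, where the peer list, the chunk size, and `node_for_block` are all made up for illustration:

```python
# Hypothetical sketch of the work-sharing idea: each node stores a
# slice of the chain, and any client can recompute which node holds a
# given block. The node names and chunk size are invented.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]  # hypothetical peer list
CHUNK_SIZE = 100_000  # blocks per chunk (arbitrary)

def node_for_block(block_num: int) -> str:
    """Map a block number to the peer responsible for its chunk."""
    chunk = block_num // CHUNK_SIZE
    digest = hashlib.sha256(str(chunk).encode()).digest()
    return NODES[int.from_bytes(digest[:4], "big") % len(NODES)]

# A client asking for block 12,345,678 knows whom to query:
print(node_for_block(12_345_678))
```

Because the mapping is a pure function of the block number, every client can compute it locally, with no directory service needed.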
Splitting the blockchain (by block ranges) wouldn't make much sense, because it would be very hard to ask for useful information. However, we are moving towards fabrics and microservices.
Unless the client who asks for the data is aware of the distribution scheme.
It doesn't matter. If only blocks are distributed, then it's really inefficient to retrieve data such as "who follows user x". Knowing who can give you which block ranges is irrelevant information for that kind of query.
Reindexing the whole blockchain with the tags plugin turned on makes that information available for fast access at run time.
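To make the trade-off concrete, here is a hedged sketch (with invented block and operation shapes, not steemd's real structures) of how a replay-time plugin can fold scattered chain data into an index, turning "who follows user x" into a cheap lookup:

```python
# Sketch: during reindex, a plugin folds every follow operation into an
# in-memory map, so the query becomes a dictionary lookup instead of a
# full-chain scan. Block/operation layout is invented for illustration.
from collections import defaultdict

followers = defaultdict(set)  # followed account -> set of followers

def index_block(block: dict) -> None:
    """Fold one block's operations into the follower index."""
    for op in block.get("operations", []):
        if op["type"] == "follow":
            followers[op["following"]].add(op["follower"])

# Replay: one linear pass over the whole chain at startup...
for block in [{"operations": [{"type": "follow",
                               "follower": "alice",
                               "following": "bob"}]}]:
    index_block(block)

# ...and answering "who follows bob" is O(1) at run time:
print(followers["bob"])
```

The index requires one linear pass over every block, which is exactly why holding only a range of blocks doesn't help with this kind of query.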
It's in the network's best interest to have seeders, not leechers.
True. A cleverer breakup might be by transaction type.
Ultimately it might even make sense to store monetary transactions in the main chain, text data in one side chain, and big data (such as videos) in another.
(I don't know much about this; it's pure speculation.)
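Staying in pure-speculation territory, a toy router that dispatches transactions by type to different chains; the chain names and the dispatch table are hypothetical:

```python
# Hypothetical breakup by transaction type: monetary ops stay on the
# main chain, bulkier payloads go to side chains. All names invented.
ROUTES = {
    "transfer": "main-chain",      # monetary transactions
    "comment": "text-side-chain",  # posts and comments
    "video": "media-side-chain",   # big binary data
}

def route_transaction(tx_type: str) -> str:
    """Pick the chain responsible for a given transaction type."""
    return ROUTES.get(tx_type, "main-chain")  # unknown types default to main

print(route_transaction("comment"))  # -> text-side-chain
```

Unlike a block-range split, this partitioning follows the shape of the queries: a node that only cares about text never has to touch the media chain.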
Having a smaller dataset may make specialized seeders viable, meaning more seeders overall. Think of a seeder/cache node specialized in content written in one language.
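As a toy example of that specialization (the `language` metadata field is an assumption for illustration, not a real Steem schema):

```python
# Hypothetical language-specialized cache node: it only keeps content
# whose metadata declares a matching language tag, so a smaller,
# focused dataset can be served by more, cheaper seeders.
NODE_LANGUAGE = "pl"  # this node only seeds Polish-language content

cache = []

def maybe_cache(post: dict) -> None:
    """Store the post only if it matches this node's language."""
    if post.get("metadata", {}).get("language") == NODE_LANGUAGE:
        cache.append(post)

maybe_cache({"author": "alice", "metadata": {"language": "pl"}})
maybe_cache({"author": "bob", "metadata": {"language": "en"}})
print(len(cache))  # -> 1: only the Polish post was kept
```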