As far as Steemit is concerned, Delegated Proof-of-Stake will always have scaling problems, no matter how much RAM you throw at it. All you're doing is making sure the end-user node requirements increase to the point where no reasonable machine can sustain it (short of being racked at a co-location facility).
Can you elaborate on this? What scaling concerns are you referring to? Are you also saying that PoW blockchains don’t have these concerns?
I think there is a (small!) bit of a valid point here in that nobody really claims PoW blockchains scale well (okay maybe the extreme big blocker faction?). PoW is trying to do something (permissionless system, nodes on non-dedicated hardware, objective global consensus, etc.) that isn't really focused on scaling as a primary design goal.
By contrast, DPoS is very much built around a premise of scaling, and its projects (Graphene in particular) have a history of making extreme claims about scalability (100K TPS or higher, claims of being able to support Reddit scale in the Steem white paper, etc.) which are often so narrowly framed as to be misleading, IMO.
CPU-wise those bold claims might be defensible, but can you imagine how fast the blockchain would grow, and how quickly (!) it would become unmanageable, if someone actually tried to do this? (Not to mention other issues such as bandwidth, etc.)
So measured against the claims, I think it is fair to say that DPoS has more scaling 'problems' because its claims and ambitions are much higher.
I agree with this.
I didn't say that PoW doesn't have scaling concerns - I'm just VERY concerned that a RAM allocation tweak is the path DPoS is going down, knowing that Proof-of-Stake has its own unique problems.
But you know what, I don't have to go down that rabbit hole -- if Steemit is still functioning a year from now with a larger user base, then I'll be wrong -- otherwise, I see this as a step down the path to ruin.
I wouldn’t characterize this as a RAM allocation tweak. It is more of a change to not use RAM, and instead use something that is more scalable: SSD.
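To make that concrete, here is a minimal sketch of what the change can look like in a node's config.ini (the shared-file-dir / shared-file-size option names are my assumption from the steemd node docs, and the paths and sizes are placeholders): the memory-mapped state file is pointed at an SSD-backed directory instead of the RAM-backed /dev/shm tmpfs, and the OS pages it in and out as needed.

```ini
# Sketch only - option names assumed from steemd's config.ini, values are placeholders.

# Old approach: state file lives in RAM-backed tmpfs, so it must fit in memory.
# shared-file-dir = /dev/shm

# SSD-backed approach: the state file lives on disk and is memory-mapped,
# so RAM only needs to hold the hot portion of it.
shared-file-dir = /mnt/nvme/steem/blockchain
shared-file-size = 64G
```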
Unfortunately that won't take things very far. I slightly disagree with the post about 16 GB witness nodes even with a fast SSD (and especially about 8 GB). I've tried to do this (and I still have one such node running) and it is very painful in practice. When memory usage doubles, 32 GB nodes will become just as painful.
But even if you disagree with this, it is still clear that it is going to become a problem at 3x or 4x or so. SSD is really not that much more scalable than RAM given how things work now, only a little (using an actual database has the potential to improve that, but that seems fairly far off).
I don’t know enough about the inner workings to know whether this is true or not.
I obviously trust your judgment on these types of things (more than mine) but I am also not 100% convinced that it wouldn’t work to have the shared memory file grow to several hundred GB (or more), and still only require 16-32 GB of RAM.
It may work, but it will just be extremely slow: not only a full replay, but even syncing a few days of blockchain after downtime. This is already the case with 30-something GB of state on 16 GB of RAM and 2x NVMe in RAID 0. I'm not really sure what they are thinking when they claim 8 GB is still usable; it certainly isn't for me. I was very seriously considering dumping the 16 GB node, but I haven't yet.
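To make the bottleneck concrete, here is a minimal, self-contained C++ sketch (my own illustration, not steemd code) of why this gets so slow: with the state memory-mapped from an SSD and a working set far larger than RAM, nearly every access to a cold object is a major page fault served at SSD latency rather than RAM speed.

```cpp
// Illustrative only: random reads over a memory-mapped file much larger than RAM.
// Each touch of a page that is not cached stalls on a page fault served by the SSD.
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <random>

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s <state-file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    // Map the whole file; the kernel brings pages in lazily on first touch.
    void* mem = mmap(nullptr, static_cast<size_t>(st.st_size), PROT_READ, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }
    const uint8_t* base = static_cast<const uint8_t*>(mem);

    // Touch pages at random offsets, roughly like chasing object references in a
    // large state file. With (say) 300 GB mapped and 16 GB of RAM, almost every
    // touch misses the page cache and waits on the SSD.
    std::mt19937_64 rng(42);
    std::uniform_int_distribution<size_t> dist(0, static_cast<size_t>(st.st_size) - 1);
    uint64_t sum = 0;
    for (int i = 0; i < 1000000; ++i)
        sum += base[dist(rng)];

    std::printf("checksum: %llu\n", static_cast<unsigned long long>(sum));
    munmap(mem, static_cast<size_t>(st.st_size));
    close(fd);
    return 0;
}
```

A full replay or catch-up sync does exactly this kind of pointer chasing across the whole state, which is why it degrades so sharply once the state file outgrows available RAM.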
One of the main concerns that the post was trying to address was that the requirements for witness and seed nodes had already reached 64 GB of RAM and were on their way to 128 and continuing to grow.
The belief that the state file should be stored in /dev/shm/, and that it was best to either rely on swapping or have enough RAM to hold the entire state file, was a large part of this misconception.
You are correct that at some point in the future, if we continue to grow without making any changes, we would reach a point where 16 GB servers, and eventually even 32 GB servers, would not be enough, but many of the changes we are working on (such as AppBase and RocksDB) are intended to address this long before we reach that point.
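As a rough illustration of why moving to a real database helps (my own sketch; the actual AppBase/RocksDB work may be structured quite differently), the idea is that state lives in a disk-backed key-value store which manages its own bounded block cache, so RAM only needs to hold the hot working set rather than the entire state file:

```cpp
// Illustration only: storing account-like records in RocksDB instead of a
// memory-mapped state file. RocksDB keeps hot data in its block cache and
// leaves cold data on SSD, so memory use stays bounded as the dataset grows.
#include <rocksdb/db.h>
#include <rocksdb/options.h>
#include <iostream>
#include <string>

int main() {
    rocksdb::Options options;
    options.create_if_missing = true;

    rocksdb::DB* db = nullptr;
    rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/steem-state-sketch", &db);
    if (!s.ok()) { std::cerr << s.ToString() << "\n"; return 1; }

    // Write: key = object id, value = serialized object (placeholder string here).
    s = db->Put(rocksdb::WriteOptions(), "account/alice", "{\"balance\":\"1.000 STEEM\"}");
    if (!s.ok()) { std::cerr << s.ToString() << "\n"; return 1; }

    // Read: only the pages holding this key need to be in memory.
    std::string value;
    s = db->Get(rocksdb::ReadOptions(), "account/alice", &value);
    if (s.ok()) std::cout << value << "\n";

    delete db;
    return 0;
}
```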
@andarchy
So essentially the suggestion is: only during the initial indexing do the files need to be read quickly, and from that point onwards only the most recent blocks need to be in RAM for fast I/O?
In other words, once the reindexing (which is CPU- and I/O-driven) is over, the "tail end of the blockchain" is what gets the I/O, and the rest need not be in memory.
If this is the case, we need to carefully page out the older parts of the blockchain from memory, and only the new (tail-end) parts need to stay in memory.
Does this make sense ?
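That is roughly my understanding as well. As a hedged sketch of the idea (standard POSIX hints, not the actual steemd implementation), the node can tell the kernel which regions of the memory-mapped state it no longer expects to touch and which recent "tail" it wants kept resident:

```cpp
// Sketch of "page out the old parts, keep the tail in memory" using mmap/madvise
// hints. Illustrative only - not how steemd actually manages its state file.
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdint>

// 'base' is the start of the memory-mapped state/block file of 'total_size' bytes;
// 'hot_tail_bytes' is how much of the most recent data we want to keep resident.
void advise_tail_hot(uint8_t* base, std::size_t total_size, std::size_t hot_tail_bytes) {
    if (hot_tail_bytes > total_size) hot_tail_bytes = total_size;

    // madvise needs page-aligned addresses, so split the file on a page boundary.
    const std::size_t page = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
    const std::size_t cold_size = (total_size - hot_tail_bytes) / page * page;

    // Older region: we don't expect to touch it soon; the kernel may reclaim these
    // pages and will re-read them from the file if they are ever accessed again.
    if (cold_size > 0)
        madvise(base, cold_size, MADV_DONTNEED);

    // Recent tail: ask the kernel to read it ahead and keep it cached.
    madvise(base + cold_size, total_size - cold_size, MADV_WILLNEED);
}
```

In practice the kernel already does something like this on its own (it evicts the least-recently-used pages first), which is why the SSD-backed setup works at all; the hints just make the "old vs. tail" split explicit.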
Indeed @timcliff, you are correct - disk space is relatively inexpensive, while RAM is much more costly. By using other services to reduce our 'full nodes' to something more like a 'consensus node', we are making it possible to scale easily and cost-effectively.
This is an excellent point and a more succinct and clear way of expressing most of what the post was trying to say.
Oh no? Then this statement:
Certainly isn't congruent at all.
Here's the thing: you're taking one strategy, "put it all in RAM," and substituting "put it all on SSD and page it INTO memory," which doesn't address why you need to do that in the first place.
The reason you have to split things into "modules" and try to even out the load is that your delegated-proof-of-stake full nodes will not scale any further without "further optimizations," which right now consist of "offload the stress to a cheaper/slower I/O device."
It's very similar to the problems Ethereum is trying to solve, because their blockchain is bloating way too fast (lots of blocks filled with all kinds of junk, like CryptoKitties, the faddish Pokémon ripoff).
All I'm seeing here is a fire-drill response that will bring short-term relief but doesn't address the fundamental problems that exist.
It's okay; with the retention metrics being consistently crap, I think the only demand you are feeding is that of the bot armies that are diligently sucking the Reward Pool dry.
It's a bit like upgrading your email server because you have a lot of spammers hammering it. It doesn't help anything, because the root problem hasn't been solved.
Sorry, but I think you are misunderstanding some of the technical aspects here. There are two different things being discussed.
If you are talking about DPoS - then you are talking about consensus nodes. These are the nodes that keep the blockchain state updated, and ensure that all of the new blocks are 'valid'. These nodes are covered in the "witness and seed node" section. Everything that is required for the DPoS portion of Steem to run is contained in these nodes, and the post was very clear that these nodes do not require the state file to be stored in RAM.
The part that you are quoting is talking about "full nodes" which get into API calls. API calls are an application layer built on top of the blockchain consensus rules. These nodes require more RAM because of the way the code is currently implemented, but eventually (as stated in the post), this logic for all of the non-consensus API methods will be handled separately - through things like HiveMind, SBDS, and RocksDB.
If your concern is rewards-pool abuse and spam - those are valid concerns, but they are not going to be resolved in the context of "addressing the scaling requirements of the blockchain". Fixing spam and abuse issues might slow down the growth, but the problem of the ever-increasing size of the blockchain will still be there, so scaling is something that needs to be addressed regardless of what is or isn't done about spam and abuse.
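To illustrate the consensus-node / full-node split in code (my own sketch; the method names and routing here are examples, not the actual AppBase design): a consensus node only needs to answer the handful of methods required to validate and serve blocks, and anything else can be forwarded to a separate application-layer service such as HiveMind or SBDS backed by its own database.

```cpp
// Illustration of separating consensus APIs from application-layer APIs.
// Method names and the routing table are examples, not the real steemd setup.
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    using Handler = std::function<std::string(const std::string&)>;

    // Methods the consensus (witness/seed) node answers from its own state.
    std::unordered_map<std::string, Handler> consensus_api = {
        {"get_block", [](const std::string& params) {
            return "block data for " + params;  // served from the local block log/state
        }},
    };

    // Everything else (history, feeds, follower graphs, ...) is forwarded to a
    // separate service (e.g. HiveMind/SBDS) backed by its own database.
    auto forward_to_app_layer = [](const std::string& method, const std::string& params) {
        return "forwarded " + method + "(" + params + ") to application-layer service";
    };

    auto handle = [&](const std::string& method, const std::string& params) {
        auto it = consensus_api.find(method);
        return it != consensus_api.end() ? it->second(params)
                                         : forward_to_app_layer(method, params);
    };

    std::cout << handle("get_block", "12345") << "\n";
    std::cout << handle("get_account_history", "alice") << "\n";
}
```

The witness/seed nodes then stay small and RAM-bounded, while the heavier query load scales out on the application layer.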
It seems we've reached the TTL for this convo, which is fine.
See you in a year, if this is even still around. Then we'll talk.