That's what I'm afraid of. I was running a full node on HDDs with a decent RAID, and everything works fine as long as the cache (at various levels) can keep up with the load. But of course, with the much, much lower latency of SSDs it might be a viable solution. (HDDs are useful because you can spot latency issues much earlier.)
Agree in full. Still, I find such experiments crucial for Steemit scalability, and I'm very glad to see more people thinking about it, as well as having an 'organ' that is supposed to perform the thinking ;)
The results above could be an explanation of the ESXi / bare metal differences, as it's likely that the Ubuntu drivers are not handling the cache correctly.
It would be interesting to see whether there is any difference if I replace the SSDs with mechanical SAS drives. (I probably won't manage it before the end of the week, but I will update.) That should give full insight into the SSD/HDD latency difference once the cache can't keep up with the data.
As for memory consumption, I think df -h is totally irrelevant here, as 'HotPlug Memory' and 'Reserve All' are configured in vCenter. This is the vCenter output for the node's memory usage (only 1.6 GB). vCenter can be very mysterious with its stats sometimes.
While I do think better SSD drives could offer a good way to scale Steemit, I still put my hopes in Stunnel -> HAProxy -> Stunnel-style clustering (just for the purpose of understanding). Following your idea of nodes with different plugins and examining tcpdumps, I found it should be possible to redirect traffic based on transaction type.
Anyhow, experimentation and communication like this are a very good signal for Steemit's future scalability, and therefore its sustainability.
Nice having you around.
Disclaimer: I don't hide my real motives; the experiments performed here are done both for the benefit of the community and to expand my knowledge in Big Data, as I do full-time consultancy for a living.
Sounds good :-)
Take a look at jussi (JSON-RPC 2.0 Reverse Proxy).
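The gist of method-based routing looks roughly like this — a minimal Python sketch with hypothetical upstream URLs and helper names, not jussi's actual code or configuration format:

```python
# Minimal sketch of JSON-RPC 2.0 method-based routing.
# Upstream URLs and the mapping below are illustrative assumptions only.
import json

# Map JSON-RPC method prefixes to dedicated steemd nodes (hypothetical endpoints).
UPSTREAMS = {
    "get_account_history": "http://account-history-node:8090",
    "get_block": "http://block-node:8090",
}
DEFAULT_UPSTREAM = "http://full-node:8090"


def pick_upstream(raw_request: bytes) -> str:
    """Choose an upstream node based on the JSON-RPC 'method' field."""
    request = json.loads(raw_request)
    method = request.get("method", "")
    for prefix, upstream in UPSTREAMS.items():
        if method.startswith(prefix):
            return upstream
    return DEFAULT_UPSTREAM


# Example: an account-history call gets routed to the dedicated node.
print(pick_upstream(
    b'{"jsonrpc":"2.0","id":1,"method":"get_account_history","params":["ned",-1,100]}'
))
```

This maps onto the 'nodes with different plugins' idea above: calls needing a particular plugin go to a node running it, everything else to a generic full node. It only works on the RPC side, where the method name is visible in each request.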
Very interesting for RPC (I will definitely experiment with it), but it's the P2P side that's catching my thoughts.
No, wrong way. Turn around. With the current architecture we can't split that, and you definitely won't do it at the network level.
My psychiatrist told me the same ;)
Joke aside, I'll need to dig into it at the network level in order to understand it. Even if I don't manage (assuming I won't, as you seem like someone who has already tried), I'm sure to come away with a better understanding of the architecture and the payloads.