32GB currently, but that works flawlessly only with decent speed on the storage backend (a lot of I/O has to be handled).
Thank you. We are trying to understand the discrepancy between what you're saying and what @furion is saying here:
https://steemit.com/steem/@furion/updates-on-steem-python-steemdata-and-the-node-situation
Can you help any further, to put this issue to bed, so I can keep investing my time in this great project?
As far as I can see, @furion seems to be satisfied with my node. Of course, funding clusters of such (or even better, much bigger) nodes is always welcome.
It was the 128GB versus 32GB RAM difference that I was getting at, but now that you've explained you use SSD drives for swap, the apparent discrepancy is reconciled for me. Thanks.
Ah, I see. Everything depends on the exact environment. Low latency is crucial, especially during the initial reindexing. For example, my 64GB backup API node performs much more slowly (at least at reindex) because it has a single SSD drive.
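For context, putting swap on a fast SSD is the trick being described above. A minimal sketch of setting that up (the `/mnt/nvme` path and 64G size are illustrative assumptions, and the commands require root):

```shell
# Create a swap file on an SSD-backed filesystem (path and size are examples).
sudo fallocate -l 64G /mnt/nvme/swapfile   # reserve space on the fast drive
sudo chmod 600 /mnt/nvme/swapfile          # swap files must not be world-readable
sudo mkswap /mnt/nvme/swapfile             # write swap metadata to the file
sudo swapon /mnt/nvme/swapfile             # enable it immediately

# Persist across reboots:
echo '/mnt/nvme/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

With swap on NVMe/SSD, the shared-memory file can overflow RAM without the node grinding to a halt the way it would on spinning disks, which is why a 32GB box can still keep up.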
Oh right. I've been considering setting up a witness, but would only be interested in a full RPC node, as I'm doing development with RPC (@steemreports).
Given that I'd need to do it remotely, and so won't have access to the hardware, is there a suitable provider and server you could recommend? I'm getting the impression a VPS isn't going to cut it, for a number of reasons.
I'd very much appreciate any pointers.
Please note that a full RPC node and a witness node are two separate things and should never be mixed.
As for a VPS, it might be viable as long as you have enough dedicated resources (i.e. guaranteed IOPS).
An i3.2xlarge on AWS should do the trick. Performance might vary a lot between providers despite identical parameters on the label; you also need to figure out for yourself what would be more suitable and cost-effective: more RAM or faster storage.
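Since labels can't be trusted, it's worth measuring the storage yourself before committing to a provider. A rough sketch (the `/tmp/ddtest` path is an arbitrary choice; the quick `dd` run only gives a sequential-write figure, while a tool like fio is the usual way to get random-I/O IOPS, which is closer to steemd's reindex workload):

```shell
# Quick sequential-write check: write 256 MB and flush it to disk before
# dd reports, so the page cache doesn't inflate the throughput number.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
rm -f /tmp/ddtest

# For random-read IOPS, something like (requires fio to be installed):
#   fio --name=randread --rw=randread --bs=4k --size=1G --numjobs=4 \
#       --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --group_reporting
```

Comparing these numbers between a candidate VPS and a known-good dedicated box is a quick way to tell whether the "guaranteed IOPS" claim holds up.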
Thanks for this...
So if steemd is run with 'USE_FULL_WEB_NODE', it doesn't produce blocks? Or do you mean you just wouldn't broadcast your witness intent, because the node wouldn't realistically have enough resources for both tasks?
So it's only running with 'USE_FULL_WEB_NODE' that has these challenging resource requirements, and if there's any problem, it's because the reward structure pays only for blocks and not for RPC requests, hence the difficulty in providing economic incentives for running RPC nodes.
Am I on the right track now?
An architectural diagram or description of all this would be really helpful; otherwise I'm left continuing to guess about this. Are there any links to these kinds of resources?
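In lieu of a diagram, the witness/full-node split discussed above mostly comes down to which plugins and APIs steemd is configured with. A hypothetical config.ini sketch of the two styles (plugin and API names varied between steemd versions, so treat every name here as illustrative rather than exact):

```ini
# --- Witness (block-producing) node: consensus only, minimal plugins ---
enable-plugin = witness
witness = "yourwitnessname"     # placeholder witness account
private-key = 5J...             # block-signing key (elided)

# --- Full RPC node: no block production, heavy history/query plugins ---
# enable-plugin = account_history follow market_history tags
# public-api = database_api login_api follow_api market_history_api tag_api
```

The witness configuration stays small because it only needs to validate and sign blocks, while the full-node configuration indexes history for RPC queries, which is where the large RAM and fast-storage requirements come from.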