You are viewing a single comment's thread from:

RE: Need Help Contributing To Hive

in HiveDevs · last year

So it is gatekeeping.
So this software never did scale the way I was told it would.

My first impulse was to think this would never work once the blockchain (and therefore the tables) grew. I was told that would never be an issue, by the same people who had been building hivemind for years.

I have looked at HAF, and since it requires pretty pricey machines (you need to run your own node to start with), it cannot be the right solution.
I always wondered: if I have a node running and it already has some form of DB, why not query that DB directly?
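Querying the node's data store directly is essentially what HAF formalizes: it mirrors chain data into PostgreSQL so an app can use plain SQL instead of going through an API node. As a minimal sketch of what that could look like on the app side (the table and column names here, such as `hive.operations_view` and `block_num`, are assumptions for illustration, not a confirmed HAF schema):

```python
# Hedged sketch: build a parameterized SQL query against a HAF-style
# PostgreSQL database. The schema names below are assumptions, not the
# verified HAF layout; the %s placeholders follow the style used by
# common Python PostgreSQL drivers.

def ops_in_range_query(first_block: int, last_block: int):
    """Return (sql, params) selecting all operations in a block range."""
    sql = (
        "SELECT block_num, trx_in_block, body "
        "FROM hive.operations_view "          # assumed table name
        "WHERE block_num BETWEEN %s AND %s "
        "ORDER BY block_num"
    )
    return sql, (first_block, last_block)

sql, params = ops_in_range_query(80_000_000, 80_000_100)
print(params)  # (80000000, 80000100)
```

In practice you would hand `sql` and `params` to a database driver's `execute()` call; the point is only that once the node's data lives in a relational DB, "query it directly" is an ordinary SQL statement.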

Anyways, we are back at the main problem: Lack of transparency, working behind closed doors. Little communication.


Steem had (and still has) enormous problems with scaling. In comparison, Hive is light as a feather, and a lot of people are working on making it even lighter. Computers are also getting better and cheaper all the time. On the other hand, blockchains (not just Hive) are getting heavier; I hear that even old grandma Bitcoin is over 400 GB today. The question is which is going to grow faster: the hardware or the data. So far hardware wins big time. I'm running a node on my home computer and it doesn't interfere with my other activity in any way, and my development machine can run multiple nodes at the same time.
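The hardware-vs-data question can be put into rough numbers. As a back-of-envelope sketch (assuming a 3-second block interval and a 64 KiB block size cap, both of which are my assumptions here, and noting that typical mainnet blocks are far smaller than the cap):

```python
# Back-of-envelope: worst-case chain growth per year if every block
# were completely full. Assumptions (hedged): 3-second block interval,
# 64 KiB maximum block size.
SECONDS_PER_YEAR = 365 * 24 * 3600
BLOCK_INTERVAL_S = 3
MAX_BLOCK_BYTES = 64 * 1024

blocks_per_year = SECONDS_PER_YEAR // BLOCK_INTERVAL_S
worst_case_bytes = blocks_per_year * MAX_BLOCK_BYTES

print(blocks_per_year)          # 10512000 blocks per year
print(worst_case_bytes / 1e9)   # ~688.9 GB per year, every block full
```

Even this worst case is within reach of ordinary consumer storage, and actual mainnet blocks use only a small fraction of the cap, which is why the current data rate is manageable on a home machine.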

I'm not running HAF though, and the last time I ran the old Hivemind, the chain was a couple million blocks lighter and I was doing it on a separate server. I don't know enough about HAF, but my understanding is that if you are a service provider (you develop and maintain your own app on Hive), you will have two choices: either run your own node with HAF parameterized down to the data your app actually needs, or deploy on some "full HAF" server that has all the data and can host many apps (I think the latter can only be sustainable long term if the operator of such a node receives some compensation from the devs of the apps hosted on it... or from the DHF).

HAF is heavy, but it is still in its infancy and a lot can still be done. I think it has yet to prove itself in a regular production environment. There are two aspects: whether it can keep up with live sync even with bigger blocks (the current mainnet data rate is really not challenging), and how queries are going to be affected when its size doubles or triples. Both issues are being investigated and problems addressed, but I'm only observing from the sidelines, so that's where my knowledge on the topic ends. We are also building tools to flood it with max-sized blocks (I'm pretty confident that the node itself will not be overrun by blocks filled with 150-200 times more transactions than the typical blocks we see on mainnet these days, but how HAF is going to react is anybody's guess).

Anyways, we are back at the main problem: Lack of transparency, working behind closed doors. Little communication.

People in IT are typically not known for great communication skills. We let the code talk 😜