
RE: HBD.Funder Spam Comments Earned 1,439,172 Hive in 2021 (and they only started on March 21st)

in LeoFinance · 3 years ago

You're quoting a theoretical amount based on the current value of Hive, but most of that is not spendable. There's about 10M HBD that is spendable now, and even that is more limited than it sounds, because only 1/100th of that 10M HBD can be paid out per day. It's not a lot for the chain to spend, compared to many other chains, unfortunately.
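To put rough numbers on that (a back-of-the-envelope sketch; the ~10M balance is the approximate figure above, not live chain data):

```python
# Sketch of the daily spending cap: the fund can pay out at most 1/100th
# of its *current* HBD balance per day, so the cap shrinks as it's spent.
balance = 10_000_000.0                 # ~10M spendable HBD (approximate)
print(f"today's cap: {balance / 100:,.0f} HBD")    # ~100,000 HBD/day

spent = 0.0
for _ in range(365):                   # pay the full cap daily, no inflows
    payout = balance / 100
    balance -= payout
    spent += payout
print(f"max spend in a year: {spent:,.0f} HBD")    # ~9.7M, bounded by the fund
```

So the naive 100k × 365 annualization overstates what's available; without new inflows, a year of maximum spending can't exceed the fund itself.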


How much more is there to be done on layer one?
I know layer two is unlimited, but how close is layer one to completion?

Software of this type rarely reaches a true state of completion: as long as there are creative people working on it, new ideas keep emerging for how to improve things. But here's an offhand list of future improvements to hived that I believe should be done.

fix locking problems in blockchain/p2p interface

We've known for some time that when Steemit re-used the p2p layer from BitShares in Steem, they didn't get the locking code right. This is definitely on the list to get fixed. It doesn't cause any data errors, but it does hurt hived performance, and it can cause a node to lose connectivity to its peers if the blockchain thread gets tied up doing too much work.
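To illustrate the failure mode (a deliberately simplified sketch with invented names, not the actual hived threading code): when blockchain work and p2p message handling contend on a single coarse lock, a long block-processing step can stall the p2p side past a peer's timeout.

```python
import threading
import time

# Hypothetical illustration of the coarse-locking problem; names and
# structure are invented for this sketch, not taken from the hived source.
chain_lock = threading.Lock()       # one lock shared by blockchain and p2p

def apply_blocks():
    with chain_lock:
        time.sleep(2.0)             # long blockchain work holding the lock

def handle_peer_message():
    start = time.time()
    with chain_lock:                # p2p thread blocks on the same lock...
        pass
    waited = time.time() - start
    if waited > 1.0:                # ...and peers may time us out meanwhile
        print(f"p2p stalled {waited:.1f}s -- peers may disconnect")

t = threading.Thread(target=apply_blocks)
t.start()
time.sleep(0.1)                     # let the blockchain thread grab the lock
handle_peer_message()
t.join()
```

A fix along the lines of finer-grained locking (or queueing work so the p2p thread never waits on long blockchain tasks) avoids the stall.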

improvements to p2p layer

Improvements to the performance and functionality of the p2p layer. While I think Hive has one of the best existing p2p layers among crypto projects, work on it stopped before it was ever fully optimized. It was designed during the BitShares 1.0 days, and as BitShares funding started running out, we were pulled off the p2p layer (it was "good enough") to help out in other areas of the code that were suffering (for example, the BitShares code for resolving forking logic was broken and they needed some smart guys to fix it).

reduction of the storage footprint required to operate a hived node

There are a couple of interesting things that could be worked on here:

  • use of compression to reduce the size of the block_log (see the sketch after this list)
  • look into ways of operating "lite" nodes that don't need to keep all of the block_log locally. This could even include some kind of distributed and redundant storage between nodes where different nodes keep different blocks and share them as needed. In an extreme case, there could be very lite nodes that only keep new blocks.
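For the compression idea, here's a minimal sketch (using Python's built-in zlib purely for illustration; this is not how hived actually stores blocks) of writing a compressed block log with an offset index, so individual blocks can still be read back at random:

```python
import zlib

# Illustrative only: compress each serialized block and record its offset
# and compressed length so single blocks can be random-accessed later.
def write_compressed_log(blocks, path):
    index = []                           # (offset, compressed_len) per block
    with open(path, "wb") as f:
        for raw in blocks:
            comp = zlib.compress(raw, level=6)
            index.append((f.tell(), len(comp)))
            f.write(comp)
    return index

def read_block(path, index, block_num):
    offset, length = index[block_num]
    with open(path, "rb") as f:
        f.seek(offset)
        return zlib.decompress(f.read(length))

# Block data is highly repetitive (operation names, account names), so it
# should compress well; fake payloads stand in for serialized blocks here.
blocks = [b"fake_block_" + str(i).encode() * 200 for i in range(3)]
idx = write_compressed_log(blocks, "block_log.z")
assert read_block("block_log.z", idx, 1) == blocks[1]
```

A lite node could apply the same index idea across machines: each node keeps only some ranges of blocks locally and fetches the rest from peers on demand.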

speedups to hived API performance

We're working on this now.

speedup block finality

We'll begin researching this soon, as it will allow for further improvement in the performance of HAF-based apps (HAF app performance is already very good, but faster is still better).
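For context on what block finality means here (a simplified sketch of the DPoS irreversibility rule as I understand it; the names are illustrative, not the hived implementation): a block only becomes irreversible once a supermajority of the 21 active witnesses have confirmed it, and apps that wait for irreversibility inherit that multi-block delay.

```python
import math

# Simplified sketch: the last irreversible block is the highest block that
# at least ~3/4 of the active witnesses have confirmed. Illustrative only.
def last_irreversible_block(confirmed_by_witness, threshold=0.75):
    confirmed = sorted(confirmed_by_witness, reverse=True)
    needed = math.ceil(len(confirmed) * threshold)   # e.g. 16 of 21
    return confirmed[needed - 1]     # highest block that many have confirmed

# 21 witnesses, most caught up to block 1000, a few lagging behind:
confirmations = [1000] * 16 + [998, 998, 995, 990, 980]
print(last_irreversible_block(confirmations))        # 1000 (16/21 >= 75%)
```

Speeding up finality means shrinking the gap between the head block and that irreversible block, so HAF apps can safely act on new data sooner.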

Hmmm, I take your point that 100k per day, bounded overall by the ~10M sitting in the fund, is on the low side considering the prevailing wages in the industry.

It seems to me that another year, maybe two, will be needed to clean up the things you've listed, so I guess I can shelve this complaint for some time.

My place in the crab bucket requires that I maximize what goes into the newb attraction pool.
Nothing personal, just business.

With any luck, Hive will moon, and the class of 2016 can move on to other things.

Hi @antisocialist. One question, for now.

What is "The class of 2016?"

People who got here in 2016.
People with 5-digit, or lower, account numbers?

So, me, for instance?

Yes, you would qualify.

Good insights, I didn't know the p2p interface came from BitShares.