It’s been a while since I last posted a progress update, so I apologize for the delay.
In the early part of August, I devoted some time to looking for potential bottlenecks in our API node infrastructure, making improvements, then analyzing the results. The positive results of that work were discussed in Hive core developer meeting #27, so I won’t repeat the details today, but we did see very substantial gains in API response times. I may make a separate post in the future to show some of the gains across various API calls.
In this post I will focus mainly on some of the coding work done in the past two weeks by the BlockTrades team. As a side note, a lot of smaller tasks were also completed during this period, including internal infrastructure improvements (e.g., adding and configuring new servers; GitLab is now hosted on a faster server), testing improvements, and other items that I won’t go into detail about here.
Hived work (blockchain node software)
As mentioned in the Hive dev meeting, we reduced hived’s memory usage from 20GB to 16GB. As part of this work, we also did some code cleanup and minor efficiency improvements:
https://gitlab.syncad.com/hive/hive/-/merge_requests/248
https://gitlab.syncad.com/hive/hive/-/merge_requests/247
We also completed the refactor of the command-line interface (CLI) wallet to reduce the amount of work required when new API methods are added to hived:
https://gitlab.syncad.com/hive/hive/-/merge_requests/170
Speeding up the sql_serializer plugin that writes to the HAF database
Most of the recent work in hived has focused on the sql_serializer plugin. This plugin fills a HAF database with the blockchain data used by Hive applications, so it is fundamental to our plan for a scalable 2nd layer app ecosystem.
Since the serializer provides data to HAF apps, the speed at which it can transfer blockchain data to a postgres database sets an upper limit on how fast a new HAF server can be initialized from scratch. Based on benchmarks run last night, we’ve reduced this time to the point where the serializer can write all 50M+ blocks of the Hive blockchain to a postgres database and initialize the associated table indexes in 7 hours (on a fast machine). That’s more than 2x faster than our previous time for this task.
As impressive as those times are, we should still be able to dramatically reduce this time in many cases by enabling an option for the serializer to filter out operations that aren’t of interest to the Hive apps hosted on a specific HAF server. For example, a standard social media application such as Hivemind doesn’t need to process most of the blockchain’s custom_json operations, so a lightweight hivemind node could configure its serializer to use a regular expression to capture only the operations it supports, as sketched below.
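To make the filtering idea concrete, here’s a rough Python sketch (the real serializer is C++ code inside hived; the keep_operation function and the example regex below are purely illustrative, not actual serializer code):

```python
import json
import re

# Hypothetical pattern: keep only the custom_json operations whose id
# matches something the local apps process (hivemind, for example,
# handles follow/community/notify operations).
INTERESTING_CUSTOM_JSON = re.compile(r"^(follow|community|notify)$")

def keep_operation(op: dict) -> bool:
    """Return True if this operation should be written to the HAF database."""
    if op["type"] != "custom_json_operation":
        return True  # this sketch only filters custom_json operations
    return bool(INTERESTING_CUSTOM_JSON.match(op["value"]["id"]))

# A custom_json operation roughly as it appears in block data:
op = {
    "type": "custom_json_operation",
    "value": {"id": "follow", "json": json.dumps(["follow", {}])},
}
print(keep_operation(op))  # True, since hivemind processes follow ops
```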
Hivemind (2nd layer applications + social media middleware)
Bug fixes
Fix to community pagination with pinned posts: https://gitlab.syncad.com/hive/hivemind/-/merge_requests/496
Only allow referencing permlinks that haven’t been deleted:
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/494
Fixes related to improperly versioned package dependencies:
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/498
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/504
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/506
Restore the ability to interrupt sync with ctrl-c, and support syncing from a block_log database with mock data (part of the testing work for HAF):
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/508
Hivemind optimizations
Improve query planning under postgres 12 (postgres 12 ships with Ubuntu 20.04, so we’re planning to make 12 the recommended version for HAF and hivemind):
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/505
Speedup of post-massive sync cleanup phase:
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/507
Improved storage management during massive reputation data processing to speed up massive sync:
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/509
After consulting with Hive app devs to be sure no one used it, we are going to eliminate the “active” field from the response returned by some post-related Hive API calls.
Hive Application Framework (HAF)
We’re currently examining alternatives for injecting “computed data” into a HAF database. As a specific example, hived currently computes “impacted accounts”: the set of accounts affected by each blockchain operation. Many HAF apps will also need information about impacted accounts. In theory, there are three ways to handle this: 1) have the sql_serializer write the data computed by hived directly to the database, 2) re-use the code from hived in the form of a postgres C++ extension, or 3) have HAF apps recompute this data themselves. Option 3 seems pretty wasteful, so we’re mostly looking at options 1 and 2. Personally, I favor option 1, but we’re trying out option 2 now to see how it works out. A sketch of what this computation looks like follows.
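To illustrate what “impacted accounts” means in practice, here’s a toy Python version of the idea (the real computation is hived C++ code covering every operation type; the impacted_accounts function below is a simplified stand-in handling just two operations):

```python
def impacted_accounts(op: dict) -> set:
    """Toy analogue of hived's impacted-accounts computation: collect the
    account names touched by an operation, so apps can index operations
    on a per-account basis."""
    value = op["value"]
    if op["type"] == "transfer_operation":
        return {value["from"], value["to"]}
    if op["type"] == "vote_operation":
        return {value["voter"], value["author"]}
    # hived handles every operation type; this sketch covers only two.
    return set()

op = {"type": "transfer_operation",
      "value": {"from": "alice", "to": "bob", "amount": "1.000 HIVE"}}
print(impacted_accounts(op))  # {'alice', 'bob'}
```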
HAF-based Account history app
We’ve completed our first example app for HAF. This app replaces the functionality of a hived account history node, which means that future Hive API nodes will be able to operate with a consensus hived node instead of needing to run a heavier-weight hived node configured with the account history plugin. The app is located here: https://gitlab.syncad.com/hive/HAfAH/-/commits/develop
Use of this app should also result in much more scalable and responsive account history API calls (currently these are some of the biggest bottlenecks in terms of API performance). I will probably have some benchmarks for this by the next progress report.
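For anyone who wants to compare, here’s a minimal Python example of the existing account_history_api call whose behavior the HAF app replicates (https://api.hive.blog is just one public API node; any full API node works):

```python
import json
import urllib.request

# Ask a public API node for the most recent operations on an account.
payload = {
    "jsonrpc": "2.0",
    "method": "account_history_api.get_account_history",
    "params": {"account": "blocktrades", "start": -1, "limit": 5},
    "id": 1,
}
req = urllib.request.Request(
    "https://api.hive.blog",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    history = json.loads(resp.read())["result"]["history"]

# Each entry is [sequence_number, operation_object].
for index, entry in history:
    print(index, entry["op"]["type"])
```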
HAF-based Hivemind app
We had to make further updates to hivemind to get it to work from tables similar to those used by HAF (we’d done this previously, but later changes to hivemind had to be accounted for). Yesterday we completed a successful massive sync with this new quasi-HAF version of hivemind.
What’s next?
In the coming week we’ll be focused on the following tasks:
- conversion of hivemind into a HAF app
- testing and benchmarking of HAF account history app
- continued tuning of HAF code based on above work
- begin planning for HF26
Great work!! This is very exciting stuff.
Any update on RC delegations? Last I heard (May?), that was close-ish to ready to implement and didn’t need a hard fork.
@howo has committed a preliminary implementation, and I think he's waiting for someone to review it, IIRC. We've just been super busy lately.
Yes, that's the current status.
Good to know :) I think it will change the worth of RCs a lot for a secondary market :)
Oh, I bet you have been busy. Thanks for the reply. I wasn’t sure if I missed an announcement about it somewhere else. Thanks.
Add TLOS... and BSC... and LEO and other Hive Engine tokens!!!
@blocktrades Is there a PR or branch available?
https://gitlab.syncad.com/hive/hive/-/merge_requests/245
This was a productive month! Cheers.
I will be posting something about HBD in two days, I believe it's important.
Keep an eye for my post if you can, I would really like you to read it.
sure, I will
HIVE!D
@chrisrice, @kencode
fantastic updates!
@tipu curate
Hmph! What has become of Dan, who was speaking to us in ancient Korean?
@darthknight That's exactly what I did. But it seems to be a very old dialect of Korean. };)
Fake it till we make it!
wen mass adoption ? :P
Check out @penguinpablo's posts for new accounts, it's happening.
we have a long way to go but things are looking good at the moment. I was just clowning around btw :)
hive can't handle the millions a day we need unless those people each buy 3 hive hah, and then you have to sell hive as a lifetime membership to the blockchain.
even at $100 hive, 3.5 hive is $350 to join a private money-banker club, which is pretty dope if you can get a piece of the reward pool or join communities and earn Hive Engine tokens
We would just rent out resource credits :P also I think blocktrades has more work to do until the blockchain is ready for those numbers
where do u delegate RC colas in muh wallet?
shasta cola... telos
I think the recent pump is getting to you :)))
lite wallets? u mean dlease delegation?
hive has a hard limit on new accounts, doesn't it? you can only make a few million a year?
shooow me the math money
have no idea on the account limit but we may find out :P
Yeah I know lol. It's just that account sign ups are exponential and I like that
Thank you for the updates. Keep up the valuable work for Hive!
I celebrate as a triumph every project that goes in favor of the improvements of the HIVE platform, congratulations and success...
Hi, this is @armen, we played a chess game a while back on Steemit. I'm back as @lacausa, just stopped by to say hi, and as always, great work.
Welcome to Hive!
Nice work @blocktrades very informative
Great to be reading the updates. One thing I am dreaming about is a marketplace for all kinds of goods and services, built on Hive. Where we can finally pay for things (and offer things) without having to switch to fiat. Would be a dream come true. Any plans to create something like that?
I'm definitely interested in an app like that. There used to be an app on Steem for that called PeerHub. A couple of months back I talked to the guy who used to run it, to see if he would be interested in relaunching here. IIRC he showed some interest. But if that doesn't come to pass, I think we can build one from scratch without too much work using HAF.
Awesome. :)
Finally, something I can operate from home? Bookmarked to come back to the GitLab.
Has the smart contract engine been pushed back, @blocktrades?