3rd update of 2024: HAF API stack officially released, begin scoping of next release


Below are a few highlights of the Hive-related programming issues worked on by the BlockTrades team since my last report.

A new release of the Hive API stack (version 1.27.5)

The most important announcement is that we’ve tagged an official 1.27.5 release of the core API stack for Hive. This is the software run by Hive API nodes to provide data to almost all Hive apps.

This release is the biggest overhaul of the Hive software stack since Hive forked, and it represents 11 months of work by the largest-yet team of core Hive developers (I didn’t calculate the man-hours involved, as the team has grown throughout that time, but it is a lot).

In the couple of weeks since my last report, we’ve been testing and optimizing the HAF apps and associated infrastructure needed to run Hive API nodes. Throughout this process, the new stack has been rigorously tested on the production API node operated by the BlockTrades team (api.hive.blog) as we made various optimizations. We’ve also had feedback from several of the other API node operators who helped us test the release candidates before the final release, so at this point I’m pretty confident in the reliability and performance of the new stack.

Long-time readers of my blog are probably aware that one of the long-term goals of our work has been to improve the scalability of the Hive software stack. Scalability is a key factor in the usefulness of blockchain software: blockchains rely on the “network effect” for their value, and when that network is limited in size by scalability, the potential usefulness of the blockchain itself is similarly limited. So it should probably come as no surprise that scalability was again a key goal of this release.

Scalability of the new stack and a useful testing benefit

It’s not always easy to measure scalability, as there are many technical aspects of a software stack’s implementation and real-world usage that can limit it, but one useful metric for a Hive API node is the number of servers required, and the load on those servers, when handling a given amount of traffic.

In the case of api.hive.blog, which has traditionally received quite a lot of Hive’s API traffic, we used to have to distribute the software stack across 4 servers, each with 64GB of RAM and 2TB of NVMe storage (so 256GB of total RAM and 8TB of total persistent storage). If this sounds like a lot of hardware, consider that around the time Hive forked, I was told Steem was spending around $70K/month to maintain a fleet of servers performing the same function (bear in mind I don't know if that figure also counted associated labor costs).

One server in our old stack was dedicated just to running hivemind (the API service responsible for supplying social media information to Hive web apps), and it was extremely heavily loaded: despite the many optimizations made to hivemind in previous releases, it remained one of the two bottlenecks on the node’s overall performance (in part because the volume of hivemind API calls from Hive apps also increased over time).

Another heavily loaded server was dedicated just to running a hived account history node to supply data about the transaction history of Hive accounts (this was the second API bottleneck). In the new stack, this functionality is replaced by the much more efficient and responsive HAF account history app (also known as hafah).
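For readers who haven’t used the account history API before, here’s a minimal sketch of the kind of call hafah answers, using only Python’s standard library against our public node (the account name and limit are just illustrative values):

```python
import json
import urllib.request

# Ask a Hive API node for the most recent operations affecting an account.
# hafah serves the same account_history_api calls that the old
# hived-based account history node used to handle.
payload = {
    "jsonrpc": "2.0",
    "method": "account_history_api.get_account_history",
    "params": {"account": "blocktrades", "start": -1, "limit": 10},
    "id": 1,
}

request = urllib.request.Request(
    "https://api.hive.blog",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    history = json.loads(response.read())["result"]["history"]

# Each entry is a (sequence number, operation record) pair.
for seq, record in history:
    print(seq, record["op"]["type"])
```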

With the new stack, we’re able to run the entire stack, including hivemind and hafah, on a single server AND that server is handling more traffic than our old stack did back in November. Despite the load doubling since that time, the new stack is considerably more responsive (API calls are answered 2x faster on average, and 70x or more faster in some cases), and the server is only lightly loaded (it’s using only about 1/6th of its available CPU power).

The new server has better hardware than the old servers (it costs about twice as much as one of the old servers, so we only cut our server costs in half), but the hardware is complete overkill for the load we’re handling (we have 128GB of RAM and 8TB of storage, whereas our target “recommended minimum” for an API node is 64GB of RAM and 4TB of storage). It’s so much overkill that we can easily run a second copy of the entire stack on the same server and still have plenty of room left over for other software.

This ability to run a second stack on our production server is actually quite useful to us as developers: it allows us to fire up new copies of the stack with software improvements while continuing to serve API traffic from the old stack, then quickly switch back and forth as we test the improvements.

How to set up the new stack: HAF API node

The repo link above contains docker compose scripts to easily install and manage a full Hive API node. It’s so simple to use that anyone who’s familiar with basic Linux system administration should be able to set up and operate their own Hive API node now.

There are several options for how you can set up a node:

First, you can compile all the software yourself into docker images, or you can fetch pre-built, publicly available docker images that are automatically built and published by the CI system on gitlab.syncad.com.

Next, to fill your API node with Hive’s existing blockchain data, you can sync the blocks from hived nodes (the slowest method), replay the blockchain from a local block log file (probably about twice as fast if you already have a local block log), or even fetch an existing ZFS snapshot with a pre-filled database (for most operators, this is likely the absolute fastest way to set up an API node, unless you have really fast hardware).

Some replay benchmarks for Hive API nodes

On our fastest local systems (AMD 7950X with two fast NVMe drives), we can replay HAF itself in 14 hours (10 hours of replay plus 4 hours of indexing/clustering). It then takes another 48 hours to replay the various HAF apps in parallel (hivemind being the critical path among the existing default set of HAF apps). So, in total, a full replay of a Hive API node takes 14 + 48 = 62 hours (a little under 3 days).

However, our production server is a previous-generation AMD system, so a full replay there probably takes around 21 + 80 = 101 hours (a little over 4 days).

I don’t yet have feedback on how long it takes to download the ZFS snapshot we’ve put up in the cloud, but if I recall correctly it took us around 2 days to download it locally, and then only about an hour for the node to catch up to the current head block. So for anyone with slower hardware (or even fast hardware), this is probably the fastest way at the moment to set up a Hive API node. We’ll probably make the snapshot available as a torrent at some point, which should allow it to be downloaded much quicker than 2 days (as more seeds become available).
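Incidentally, whichever fill method you choose, an easy way to confirm your node has caught up is to compare its head block against a known-good public node. A minimal sketch (the local URL and port are just examples and depend on how you configured the stack):

```python
import json
import urllib.request

def head_block(node_url: str) -> int:
    """Return the head block number reported by a Hive API node."""
    payload = {
        "jsonrpc": "2.0",
        "method": "condenser_api.get_dynamic_global_properties",
        "params": [],
        "id": 1,
    }
    request = urllib.request.Request(
        node_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["result"]["head_block_number"]

# Compare a freshly filled local node against a known-good public node.
local = head_block("http://localhost:8080")   # example local endpoint
public = head_block("https://api.hive.blog")  # reference node
print(f"local node is {public - local} blocks behind")
```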

Drone (replacement for Jussi)

We had several reports of broadcast_transaction API calls intermittently getting 502 responses from our API node. With some experimentation, we traced the problem to drone, added a lot of new optional logging capability to it, and ultimately determined that some excessively long response headers were causing a problem for caddy (drone sends its responses to caddy, which then passes them back to Hive web apps).

This problem didn’t occur with Jussi because it already had a feature that “shrunk” the problematic portion of the headers. So we updated drone to avoid forwarding that portion of the headers to caddy (it was of no use to caddy or the client apps anyway, as it was primarily intended to be logged locally for analyzing API call latency, caching efficiency, etc.).
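The fix itself is conceptually simple: log the diagnostic headers locally, then strip them before the response is forwarded. Here’s a rough sketch of the idea in Python (purely illustrative; the header names are hypothetical and drone’s actual implementation differs):

```python
# Hypothetical diagnostic headers: useful in drone's local logs, but not
# worth forwarding to caddy or to client apps (real names may differ).
INTERNAL_HEADERS = {"x-upstream-timings", "x-cache-trace"}

def prepare_for_forwarding(headers: dict[str, str]) -> dict[str, str]:
    """Log diagnostic headers locally, then drop them so the proxy in
    front of us never has to handle oversized response headers."""
    forwarded = {}
    for name, value in headers.items():
        if name.lower() in INTERNAL_HEADERS:
            print(f"diagnostic {name}: {value}")  # stand-in for real logging
        else:
            forwarded[name] = value
    return forwarded
```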

Currently we are using drone as a replacement for Jussi on api.hive.blog because it is more responsive on cache misses, it requires less CPU, and we’ve optimized its caching algorithms to be more effective when handling concurrent requests for the same data.
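The concurrent-request optimization is essentially the classic “single-flight” pattern: when many clients ask for the same uncached data at once, only one upstream request is made and every waiting client shares its result. A minimal asyncio sketch of the idea (illustrative only; drone’s actual implementation differs in detail):

```python
import asyncio

class SingleFlightCache:
    """Collapse concurrent requests for the same key: the first caller
    starts the real fetch, and later callers await that same task
    instead of triggering duplicate upstream requests."""

    def __init__(self):
        self._cache: dict = {}
        self._inflight: dict = {}

    async def get(self, key, fetch):
        if key in self._cache:
            return self._cache[key]              # cache hit
        task = self._inflight.get(key)
        if task is None:
            # First caller for this key: start the single real fetch.
            task = asyncio.create_task(fetch(key))
            self._inflight[key] = task
        try:
            result = await task                  # all callers share this task
            self._cache[key] = result
            return result
        finally:
            self._inflight.pop(key, None)
```

With this pattern, ten simultaneous requests for the same uncached data trigger one upstream query instead of ten.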

We haven’t yet made drone the default choice in the haf_api_node scripts, but that will probably change quite soon.

Hivemind API (social media API)

We again dramatically improved the performance of several slow hivemind queries that we discovered while testing the stack under production loads. The latest round of improvements sped up commonly used queries for fetching posts with a specific tag and for fetching follow and mute information (these changes are part of the official hivemind release).
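For the curious, hunting down a slow query like these generally follows the same workflow: capture the query as hivemind actually issues it, then inspect its execution plan in PostgreSQL. A sketch of that workflow using psycopg2 (the connection string, table, and query here are placeholders, not hivemind’s actual schema):

```python
import psycopg2

# Placeholder connection string and query: the real hivemind schema and
# parameters differ, but the profiling workflow is the same.
conn = psycopg2.connect("dbname=haf_block_log user=haf_admin")
with conn, conn.cursor() as cur:
    cur.execute(
        "EXPLAIN (ANALYZE, BUFFERS) "
        "SELECT post_id FROM tagged_posts WHERE tag = %s LIMIT 20",
        ("photography",),
    )
    for (line,) in cur.fetchall():
        print(line)  # look for sequential scans and misestimated row counts
```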

We also identified a few remaining types of slow queries that we’re in the process of speeding up now (these upcoming changes will be trivially upgradeable).

Finally, work is currently underway to reduce the amount of logging done by hivemind servers under load.

HAF block explorer API and UI

As part of the official release we also corrected a few small issues reported with the HAF block explorer.

The block explorer UI is looking much better at this point, so hopefully we’ll find time to set up a publicly available web site for it before too long. Our API node already provides API support for the HAF block explorer, so if you run a local copy of the block explorer UI (e.g. using a pre-built docker image), you can point it at our API node and have your own personal block explorer web site while using very few computing resources.
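At its core, what such a UI presents is block and transaction data served through the stack; the HAF block explorer API adds richer, pre-indexed views on top. Here’s a small illustration of the underlying data, using the standard block_api call rather than the block explorer’s own endpoints (the block number is an arbitrary example):

```python
import json
import urllib.request

# Fetch a single block from a Hive API node: the raw material a block
# explorer UI presents (the explorer's own API adds indexed views on top).
payload = {
    "jsonrpc": "2.0",
    "method": "block_api.get_block",
    "params": {"block_num": 80000000},  # arbitrary example block
    "id": 1,
}
request = urllib.request.Request(
    "https://api.hive.blog",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    block = json.loads(response.read())["result"]["block"]

print("witness:", block["witness"])
print("transactions:", len(block["transactions"]))
```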

What’s next?

On the design front, we started early scoping work on our goals for the next release in GitLab discussions, and today we had our first group meeting on the topics of lite accounts and an implementation of 2nd layer smart contracts.

The most fleshed-out part of the plan so far is our set of ideas for supporting 2nd layer “lite accounts” (implemented via a HAF app). Some of the features to be provided are discussed here (although it doesn’t include some of the newer ideas we discussed today): https://gitlab.syncad.com/hive/haf/-/issues/214

While we’re still discussing the details of what level of functionality will be available in the initial release of the lite account app, I expect we can release it relatively quickly as the implementation appears quite simple.

We also began very early discussions of potential implementations of a 2nd layer smart contract processor, but that’s a much more complicated topic with many points to research, so it’ll be several weeks at least before we have a full scoping of the initial release (in part because several of the designers involved are also actively working on existing projects such as HAF and hivemind improvements), and even that scoping might change as we get into software prototyping.

Finally, it's worth mentioning that the new stack is easily upgradeable and few changes require hardforks, so we'll likely be delivering incremental improvements to the stack at a fairly fast rate.


and few changes require hardforks, so we'll likely be delivering incremental improvements to the stack at a fairly fast rate

I was actually wondering recently about hardforks and the lack of them. I thought this might be the case. Much better for rapid incremental releases.

Yes, another of our main goals has been to make it easy to make useful changes to Hive without needing to change the core code much.

The analogy I like to use is an operating system and the software that runs on it. Ideally, you don't want the operating system to change too often, because every time there is a change, there is a chance for new errors to be introduced, and such low-level errors can take down the entire system. So new functionality is better added via programs instead of via operating system updates.

Wow, so interesting.

Use the DHF and put some ads in other places, to let the world know about this.

So smart contracts are gaining momentum, fun ahead.

Yes, we've actually been working on the HAF foundation for it for quite a while, but now we're moving into the final part of the design.

Hello! I am so happy 🥰🥰🥰🥰🥰🥰🥰🥰 to be in your company, with all of you working day and night on the development of Hive. I hope the Hive community becomes very powerful one day, that the whole world will use it, and that everyone will know this great community. Hoping for that day 🥰🥰🥰🥰🥰🥰🥰🥰🥰🥰……

Wow, so interesting. Everyone will be happy when they hear that the network is fully restored. Very smart.

The most fleshed out part of the plan so far is some of our ideas for supporting 2nd layer “lite accounts” (implemented via a HAF app).

That sounds interesting, especially since you said it would come "relatively quickly". :)

Let's also see how much they can do, these lite accounts. You already said that's something you guys are still discussing for the initial release.

I consider lite accounts to be one of the key features for creating a highly decentralized 2nd layer for Hive. They will let Hive users interact with unrelated Hive and offchain apps with a single identity.

Will they at some point be able to interact with L1 via smart contracts, or do you believe L1 should be off-limits for L2 lite accounts?

By design, they will always remain unable to directly affect the 1st layer. This is an important safety feature for the first layer. But it will always be possible to "port" proven, desirable functionality from the 2nd layer to the 1st layer. And a 1st and 2nd layer account can be "linked" via a common private key.

And a 1st and 2nd layer account can be "linked" via a common private key.

That sounds interesting. I tried to think whether there would be a use case for the common private key other than linking the two accounts together. Since L2 can't influence L1, but the reverse could potentially happen if L2 allows it, I wonder how that can be used.
