2nd update of 2024: Releasing the new HAF-based stack for Hive API nodes

in HiveDevs · 8 months ago (edited)


Below are a few highlights of the Hive-related programming issues worked on by the BlockTrades team since my last report.

For the past two months, we’ve been testing and improving the various apps that make up the stack for Hive API nodes, as well as the infrastructure that supports it.

The big announcement is that today we’ve released 1.27.5rc8 (release candidate 8), which, based on testing so far, will probably be re-tagged this week as the official release of the stack.

HAF API node

This repo contains docker compose scripts to easily install and manage a full Hive API node. It’s so simple to use that anyone who’s familiar with basic Linux system administration should be able to set up and operate their own Hive API node now. One of the best features of the new stack is that it easily allows API node operators to incrementally add support for new Hive APIs as new HAF apps get released.

Recent changes to the scripts include:

  • Supports using either Drone or Jussi for reverse proxying and caching of JSON-based API calls (Drone is now the recommended choice).
  • The ZFS snapshot script has a “public snapshots” option that excludes logs from the snapshot (e.g. when you want to supply a snapshot to someone else).
  • Fixed log rotation of postgres logs.
  • Caddy now overrides Jussi/Drone handling of CORS.
  • Fixed various issues with the assisted_startup.sh script.
  • Fixed various configuration problems found while testing on our production node.
  • The .env file now supports downloading pre-built docker images from gitlab, dockerhub, and registry.hive.blog.
  • Added a bind-mount point for caddy logging.
  • Fixed healthchecks for btracker, hafbe, and hafah.
  • Configured haproxy to log healthcheck failures.
  • Fixed the btracker/hafbe uninstall process.
  • Simplified exposing internal ports between services when spreading node services across several servers.
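To give a feel for the configuration, selecting an image registry in the `.env` file might look something like the sketch below. The variable names here are illustrative guesses rather than ones copied from the repo, so consult the repo's example `.env` file for the authoritative names:

```
# Hypothetical .env sketch -- variable names are illustrative,
# not copied from the actual haf_api_node repo.

# Which registry to pull pre-built docker images from
# (gitlab, dockerhub, or registry.hive.blog):
HIVE_API_NODE_REGISTRY=registry.hive.blog

# Version tag of the stack to deploy:
HIVE_API_NODE_VERSION=1.27.5rc8
```

After editing `.env`, the stack is brought up with the usual `docker compose up -d`.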

We also updated the CI processes for hive, HAF, and the various HAF apps so that whenever we manually tag a gitlab repo with a new release or release candidate, the associated docker images built by CI will also be tagged with the same tag, and then automatically pushed to dockerhub and registry.hive.blog.

Last night I used the jussi traffic analyzer tool to analyze API call performance on api.hive.blog (despite the name, it also works when your node is configured to use Drone). We’re running api.hive.blog itself off a single server now (it was previously spread across the 4 servers we’d been using since Hive was first launched), and the average API call response time is about 2x what it was back in November of last year, despite handling about 50% more traffic than our old configuration. Our CPU load is also significantly lower, so we have substantial headroom for more load.

Note: the new server hardware is also somewhat faster than our old servers (at least compared to any one of them individually) and has the same amount of disk storage as all 4 of the old servers combined (4x2TB = 8TB). There’s less available memory (the new server has 128GB, while the 4 old servers had 64GB each for a total of 256GB), but we’ve also significantly lowered memory requirements over time, so this isn’t a problem. In fact, currently an API node should be able to handle quite a lot of traffic with only 64GB of RAM.

HAF (Hive Application Framework)

We fixed several issues that could cause a HAF app’s state to get corrupted during shutdowns of HAF or HAF apps because of a missed block. The balance tracker app was particularly useful in helping us identify those problems because a single block of missed data would typically cause it to fail before too long with a divide-by-zero error. As part of these changes, we also simplified the usage of the HAF API for creating HAF-based apps.
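To see why a single missed block surfaces later as a divide-by-zero, consider a hypothetical balance-tracker-style computation over a block range. This is an illustrative sketch, not the balance tracker's actual code:

```python
# Illustrative sketch (not the actual balance tracker code) of how a
# gap in processed block data can surface as a ZeroDivisionError.

def average_delta_per_block(block_range, deltas_by_block):
    """Average balance change over an inclusive block range.

    deltas_by_block maps block numbers to balance deltas; a gap in
    the map means a block's data was never processed.
    """
    processed = [b for b in range(block_range[0], block_range[1] + 1)
                 if b in deltas_by_block]
    # If the app's state is corrupted and no blocks in the range were
    # processed, len(processed) is 0 and this division raises
    # ZeroDivisionError -- loudly flagging the missing data.
    return sum(deltas_by_block[b] for b in processed) / len(processed)
```

The useful property, as noted above, is that the failure is noisy: the app crashes soon after the gap appears instead of silently accumulating wrong balances.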

We improved the stability of a number of CI tests and added more tests (e.g. tests of operation filtering during HAF replays). We also found and fixed an interaction bug between the fc scheduling library used by hived and the “faketime” tool we use in CI testing that could result in busy-looping. Fixing this bug significantly reduced the load on our test runners and, more importantly, eliminated several intermittent test failures that were occurring because tests were sometimes consuming 100% of the CPU cores on our test runners.

We shrank the size of the HAF docker image used by HAF API nodes from 3120 MB down to 611 MB to speed up downloads during installation (the 3120 MB version contains all the tools needed for development of HAF apps).

HAF Block Explorer and Balance Tracker APIs

We updated these apps to accommodate the changes to the HAF API.

We also added some new API calls and fixed bugs found in existing API calls while testing the Block Explorer UI.

And we added caching headers that provide varnish with hints about how long to cache API responses based on the API call type and the API parameters specified.
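The idea can be sketched as follows; the TTL values below are purely illustrative, not the ones actually deployed (the method names are real Hive API calls):

```python
# Hypothetical sketch of per-method cache TTL hints for a response
# cache such as varnish. TTL values are invented for illustration.

# Methods whose results rarely change can be cached far longer than
# methods whose results change every block.
CACHE_TTLS = {
    "condenser_api.get_block": 600,  # blocks are immutable once irreversible
    "condenser_api.get_dynamic_global_properties": 1,  # changes every block
}
DEFAULT_TTL = 3

def cache_control_header(api_method: str) -> str:
    """Build a Cache-Control header value hinting how long to cache."""
    ttl = CACHE_TTLS.get(api_method, DEFAULT_TTL)
    return f"public, max-age={ttl}"
```

In the real stack, the headers can additionally vary on the API parameters (e.g. a query pinned to an old, irreversible block can be cached longer than one near the head block).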

Drone (replacement for Jussi)

A while back, @deathwing created and deployed a reverse-proxying program called Drone that could potentially replace Jussi. For the past couple of months we’ve been working on the various changes needed for Drone to replace Jussi in the standard API node stack. Most of the changes were small, but we also made major improvements to Drone’s caching system, and at this point its caching is as flexible as Jussi’s and more performant.
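The core of caching JSON-RPC traffic in a proxy like Drone or Jussi can be sketched as a TTL cache keyed on the canonicalized request. This is a simplified illustration, not Drone's actual implementation:

```python
import json
import time

# Simplified illustration of JSON-RPC response caching in a reverse
# proxy; not Drone's actual implementation.

class RpcCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}

    @staticmethod
    def _key(request: dict) -> str:
        # Canonicalize method + params so equivalent requests share a
        # cache entry regardless of key ordering or the request id.
        return json.dumps(
            {"method": request["method"], "params": request.get("params")},
            sort_keys=True,
        )

    def get(self, request: dict):
        entry = self._store.get(self._key(request))
        if entry is None:
            return None
        response, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            return None  # entry expired; caller must re-fetch upstream
        return response

    def put(self, request: dict, response: dict):
        self._store[self._key(request)] = (response, time.monotonic())
```

A real proxy would also vary the TTL per API method (as discussed in the caching-headers section above) and cap memory usage with an eviction policy.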

Hivemind API (social media API)

We dramatically improved the performance of several slow queries in hivemind that we discovered while testing the stack in a production environment. These improvements also reduce database load on API nodes. There are still a couple of queries that need improvement, but we can easily deploy further improvements later, as none of the changes will require a replay of hivemind.

Hivemind was updated to accommodate the changes to the HAF API. We also improved the install and update code, including removing some obsolete code, and sped up restarting hivemind after a temporary shutdown.

API node update timeline

We expect API nodes to begin updating to the new rc8 stack over the next few weeks. We’re running rc8 now on api.hive.blog.

Near the end of this upcoming week, we will upload a ZFS snapshot with already-replayed versions of HAF and the HAF apps such as hivemind. This will be particularly helpful for nodes with slower processors, but even on fast servers it is likely to be faster to download such a snapshot than to do a local replay. However, nodes that have already replayed rc7 should be able to upgrade to rc8 relatively painlessly, and that is the recommended procedure for such nodes (only hafbe, if previously installed, will need a replay).

What’s next?

Since I think rc8 will be the official release of the stack, we’re now starting to turn our attention to future development plans. Our plans still aren’t fully formed, but here’s a sample of things we’re already looking at:

  • Begin the next phase of development of the HAF-based smart contract environment.
  • Create a HAF app to support creation and maintenance of “lite” accounts at the 2nd layer.
  • Create a “lite node” version of hived that doesn’t require local storage of a full block_log. This will also reduce the storage needs of HAF apps and Hive API nodes.
  • Further improve the “sql_serializer” plugin that takes blockchain data from hived and adds it to a HAF database.
  • A long-overdue overhaul of the hivemind code to bring its architecture into closer alignment with HAF-based coding practices. This should improve both replay time and API server performance and make the code easier to maintain.
  • Continue refining the new WAX library for developing lightweight Hive apps.
  • Finish up the new command-line wallet (Clive).
  • Finish up the GUI for the HAF-based block explorer.
  • Finish up Denser (the replacement for the Condenser social media app).
  • Finish extracting the reputation API from hivemind into a separate HAF app.
  • Update and reorganize documentation for just about everything.

Great! The recent attack showed the bottleneck of Hive decentralisation - too many apps and functions relied on a single tool.

Yes, I agree.

Actually @mahdiyari created an "api clone" for hivesql that should mitigate future problems if some of the API nodes deploy it. We haven't made it part of the default stack yet, as it requires more disk space and the initial installation wasn't totally smooth (but there were no problems that couldn't be worked around).

I'm glad to hear there's some progress :)

The performance improvements sound impressive. If that means more people can run the infrastructure then Hive will be more robust. Great work.

Yes, since Hive started that's always been one of our key goals.

I would be eager to see a data economy grow out of this new API ecosystem; it would be good for the dev community to be able to rely on a new tool to sustain their fledgling apps in a trustless way.
Some examples of this kind of economy:
-Subquery
https://subquery.network/home
-The Graph
https://thegraph.com

@blocktrades Hello, my brother, have a good day. I hope the update is worth the effort that went into it. I have another question: when will the Lallana website be fixed? It always returns an error message. https://blocktrades.us/ Error code: 503
Exchange temporarily unavailable

Congratulations @blocktrades! You have completed the following achievement on the Hive blockchain and have been rewarded with new badge(s):

You got more than 17000 replies.
Your next target is to reach 17500 replies.

You can view your badges on your board and compare yourself to others in the Ranking
If you no longer want to receive notifications, reply to this comment with the word STOP

To support your work, I also upvoted your post!

A lot of effort! That's progress! Keep going!

You all! Just want to say: Thank you and <3!


3120 GB down to 611 GB

I suppose you meant MB

Great to hear about performance upgrades.

Thanks keep it up nice work

Your post is very good, I found it very interesting

Create a HAF app to support creation and maintenance of “lite” accounts at the 2nd layer.

Would the "lite" accounts be those created by third parties to generate the key? Or similar to InLeo through social platforms (X, Gmail, Facebook and others).

To create a lite account, you would create your own key and "register" it in the blockchain with an associated "human-readable" name. Any existing hive account including "app" accounts will be able to register such lite accounts with custom_json operations. Then 2nd layer apps can "recognize" the registered lite account and can "co-sign" transactions for it on the hive blockchain.

We'll provide a HAF app that supports the API for tracking lite account registrations and probably some other services as well.
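Based on that description, a lite-account registration might look something like the following custom_json operation. The operation id ("lite_account") and the payload fields here are purely hypothetical, invented for illustration, since the actual format hasn't been published:

```python
import json

# Purely hypothetical sketch of what registering a 2nd-layer "lite"
# account via custom_json might look like; the operation id and
# payload fields are invented, not a published format.

def build_lite_account_registration(registrar: str, lite_name: str,
                                    lite_public_key: str) -> dict:
    """Build a custom_json operation registering a lite account."""
    payload = {
        "action": "register",
        "name": lite_name,       # human-readable lite account name
        "key": lite_public_key,  # key generated by the lite user
    }
    return {
        "type": "custom_json_operation",
        "value": {
            "required_auths": [],
            # The existing Hive account (e.g. an app account) that
            # broadcasts and co-signs for the lite account:
            "required_posting_auths": [registrar],
            "id": "lite_account",  # hypothetical operation id
            "json": json.dumps(payload),
        },
    }
```

A 2nd-layer app would then watch the chain for these operations, recognize the registered name, and co-sign transactions on the lite account's behalf, as described above.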

It's very interesting, I can't wait to see it.

Weldone 👍

Nice, I'll be doing things the old school 'hard' way by not using the new API setup methods as iirc it expects ZFS to be used and would break if it isn't? Is that still the case?

Either way, I've got a good workflow for setting everything up the old-fashioned way, so it should be rather painless.

The haf_api_node stack doesn't require ZFS, so I strongly recommend you use the stack even if you don't use ZFS. One of our internal systems doesn't have ZFS installed and it still uses the stack without any problems; it just can't use the scripts for taking snapshots and creating the initial directory hierarchy.

Sweet, I'll give it a go then!

Congratulations @blocktrades! Your post has been a top performer on the Hive blockchain and you have been rewarded with this rare badge

Post with the highest payout of the day.

You can view your badges on your board and compare yourself to others in the Ranking
If you no longer want to receive notifications, reply to this comment with the word STOP

@blocktrades i think this program update.. is very useful for everyone... Thanks you so much 😘 for everything 💗

a single block of missed data would typically cause it to fail before too long with a divide-by-zero error

Ran into this issue myself a few days ago when Docker Compose recreated the base container for no good reason when starting an app. Have to explicitly specify --no-recreate every time I guess.

Update time again it is.

Good news is rc8 seems to have resolved all the issues associated with this.

Thank you for your witness vote!
Have a !BEER on me!
To Opt-Out of my witness beer program just comment STOP below


Good job. I hope that smart contracts soon will be a part of the hive environment.

How can I get your upvote .. with my hard work... But i respect you all time ❣️😚

As a beginner to Hive, posts like this are so helpful! Thank you for the informative update @blocktrades

Congratulations @blocktrades! Your post has been a top performer on the Hive blockchain and you have been rewarded with this rare badge

Post with the highest payout of the week.

You can view your badges on your board and compare yourself to others in the Ranking
If you no longer want to receive notifications, reply to this comment with the word STOP

Are we still ahead of the curve technologically or web3-culturally, if that makes sense? It's taken years to develop this place and I'm pretty confident there's great reasons for that despite my lack of technical knowledge, but I assume we're getting a super robust, secure and government-proof entity as a result.

But with new tech developing every day, are we still cutting edge? (to be clear I don't think this in itself is even that important, but still, interesting thought)

In terms of tech in use, I think we're ahead of the curve for blockchain technology.

Great💐💐

Out of curiosity - do you rent bare-bones servers or colocate your hardware somewhere?

We have some servers at our own premises plus we rent bare-bones servers.

well done brother

Is there a version of Denser anywhere that we can test yet? Or do I need to install my own version from Docker? Thanks

We run a copy locally, but there's not one that's publicly accessible yet as it's still too early (e.g. right now work is being done related to login). I've only played with the one that's directly accessible on our intranet, and right now I don't even know if there's a docker for it as I'm not too directly involved in the project.

Ah ok, thanks for the info. I can see there are docker setup instructions in the repo, so maybe it can be attempted. I am about to start work on some analysis/design on the SPK/Ecency fork to see if we can optimise/improve the UX. Was mainly curious to see if you guys had significantly altered the layout/design of (con)denser or if the changes were all under the hood.

By design, the initial layout tries to match condenser for the most part.

Ah ok, great, thanks for clarifying :)

Congratulations @blocktrades! You received a personal badge!

Happy Hive Birthday! You are on the Hive blockchain for 8 years!

You can view your badges on your board and compare yourself to others in the Ranking

Check out our last posts:

Hive Power Up Day - April 1st 2024
Happy Birthday to the Hive Blockchain