Running Hivemind & HAfAH on HAF + Jussi - 2023


stolen video from gtg

Related post: How to run a HAF node


Table of contents:

  • PostgreSQL configs for hivemind
  • Hivemind on HAF
  • HAfAH (account history)
  • Jussi

I assume you are here after following the previous post.

The process is usually the same for production and development.

Preparing PostgreSQL

Add the following line to the file
/pg/workdir/haf-datadir/haf_postgresql_conf.d/custom_postgres.conf

hba_file = '/home/hived/datadir/haf_postgresql_conf.d/custom_pg_hba.conf'

Do NOT change the path! That path is for inside the docker container.

You might have to create the above file. It lets us override the default pg_hba.conf for PostgreSQL and allow the haf_admin and haf_app_admin users to connect over the docker network.
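For example, one way to create it from the host (a sketch; the host-side paths assume the layout from the previous post, and sudo may or may not be needed depending on who owns haf-datadir):

# create the config directory if missing and append the hba_file line
sudo mkdir -p /pg/workdir/haf-datadir/haf_postgresql_conf.d
echo "hba_file = '/home/hived/datadir/haf_postgresql_conf.d/custom_pg_hba.conf'" | sudo tee -a /pg/workdir/haf-datadir/haf_postgresql_conf.d/custom_postgres.conf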

Create the following file with these lines:
/pg/workdir/haf-datadir/haf_postgresql_conf.d/custom_pg_hba.conf

host haf_block_log haf_admin 172.0.0.0/8 trust
host haf_block_log haf_app_admin 172.0.0.0/8 trust

# DO NOT DISABLE!
# If you change this first entry you will need to make sure that the
# database superuser can access the database using some other method.
# Noninteractive access to all databases is required during automatic
# maintenance (custom daily cronjobs, replication, and similar tasks).
#
# Database administrative login by Unix domain socket
local   all             postgres                                peer

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     peer
host    replication     all             127.0.0.1/32            md5
host    replication     all             ::1/128                 md5

This 👆 is the default pg_hba.conf included in HAF, with the addition of the first two lines.

Restart the HAF container and you should be good to go.

docker stop haf-instance
cd /pg/workdir
../haf/scripts/run_hived_img.sh registry.gitlab.syncad.com/hive/haf/instance:instance-v1.27.4.0 --name=haf-instance --data-dir=$(pwd)/haf-datadir --shared-file-dir=/dev/shm --detach

This will allow access to the database as haf_admin, which is needed for preparing the database for hivemind. You can optionally remove the first line (the haf_admin entry) afterwards.
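To verify the new rules, you can optionally run a quick connection test from the host (a sketch; it assumes postgresql-client is installed — we install it in the hivemind section below — and that haf-instance has the default docker bridge IP 172.17.0.2, which you can confirm with docker inspect as shown in the Jussi section):

psql "postgresql://haf_admin@172.17.0.2/haf_block_log" -c "SELECT 1;"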

Hivemind on HAF

Clone hivemind:

cd /pg
git clone https://gitlab.syncad.com/hive/hivemind
cd hivemind
git checkout v1.27.4.0.0
git submodule update --init --recursive

Building

cd /pg/workdir
../hivemind/scripts/ci/build_instance.sh v1.27.4.0.0 ../hivemind registry.gitlab.syncad.com/hive/hivemind/

Preparing the database

sudo apt install postgresql-client -y
../hivemind/scripts/setup_postgres.sh --postgres-url=postgresql://haf_admin@172.17.0.2/haf_block_log
../hivemind/scripts/setup_db.sh --postgres-url=postgresql://haf_admin@172.17.0.2/haf_block_log

Finally, we can run hivemind. You will have to start two instances: one for syncing and another as a server instance to serve the APIs.

Sync:

../hivemind/scripts/run_instance.sh registry.gitlab.syncad.com/hive/hivemind/instance:v1.27.4.0.0 sync --database-url="postgresql://haf_app_admin@172.17.0.2:5432/haf_block_log" --docker-option=--detach --docker-option=--name=hivemind-sync

Check logs

docker logs hivemind-sync -f --tail 50

Server:

../hivemind/scripts/run_instance.sh registry.gitlab.syncad.com/hive/hivemind/instance:v1.27.4.0.0 server --database-url="postgresql://haf_app_admin@172.17.0.2:5432/haf_block_log" --docker-option=--detach --docker-option=--name=hivemind-server

Check logs

docker logs hivemind-server -f --tail 50

Hivemind is very slow at syncing and will probably take 3-4 days depending on your storage & CPU speed.
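Once the server container is running, a quick JSON-RPC smoke test can confirm it responds (a sketch; it assumes hivemind-server listens on its default port 8080 and got the docker bridge IP 172.17.0.4 shown in the Jussi section below — adjust both if yours differ; hive.db_head_state should report the sync status):

curl -s -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"hive.db_head_state","params":{},"id":1}' http://172.17.0.4:8080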


HAfAH - Account history on HAF

Running HAfAH is easy, as it doesn't need to sync. It uses the data already present in HAF.

cd /pg
git clone https://gitlab.syncad.com/hive/HAfAH
cd HAfAH
git checkout v1.27.4.0.0
git submodule update --init --recursive

Building

# inside HAfAH folder
scripts/ci-helpers/build_instance.sh "v1.27.4.0.0" . registry.gitlab.syncad.com/hive/hafah --haf-postgres-url=postgresql://haf_app_admin@172.17.0.2:5432/haf_block_log

Running

docker run --rm -itd --name=hafah-instance registry.gitlab.syncad.com/hive/hafah/instance:instance-v1.27.4.0.0

Check logs

docker logs hafah-instance -f --tail 50
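To confirm HAfAH responds, you can try a simple account history call (a sketch; it assumes the container got the docker bridge IP 172.17.0.5 shown in the Jussi section below and serves on port 6543, the port used in the Jussi config):

curl -s -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"account_history_api.get_ops_in_block","params":{"block_num":1,"only_virtual":false},"id":1}' http://172.17.0.5:6543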

Jussi

Jussi is used as a proxy that exposes all the APIs and handles caching and timeouts for certain APIs.
First, we need to get the IP addresses of haf-instance, hivemind-server, and hafah-instance.

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' haf-instance
# 172.17.0.2
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' hivemind-server
# 172.17.0.4
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' hafah-instance
# 172.17.0.5

Clone jussi

cd /pg
git clone https://gitlab.syncad.com/hive/jussi
cd jussi

Edit the Dockerfile and comment out (put # in front of) or remove the following line, found around line 116:

# RUN pipenv run pytest
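If you'd rather not edit the file by hand, a sed one-liner along these lines should do it (a sketch; double-check that the line in your checkout matches exactly before running it):

sed -i 's/^RUN pipenv run pytest/# RUN pipenv run pytest/' Dockerfile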

If your IP addresses are different, replace them in the config below.
Add the following lines to config.json:

{
  "limits": { "accounts_blacklist": [""] },
  "upstreams": [
    {
      "name": "hived",
      "translate_to_appbase": true,
      "urls": [["hived", "http://172.17.0.2:8090"]],
      "ttls": [
        ["hived", 3],
        ["hived.login_api", -1],
        ["hived.network_broadcast_api", -1],
        ["hived.market_history_api", 1],
        ["hived.database_api", 3],
        ["hived.database_api.get_block", -2],
        ["hived.database_api.get_block_header", -2],
        ["hived.database_api.get_content", 1],
        ["hived.database_api.get_dynamic_global_properties", 1]
      ],
      "timeouts": [
        ["hived", 5],
        ["hived.network_broadcast_api", 0]
      ]
    },
    {
      "name": "appbase",
      "urls": [
        ["appbase.wallet_bridge_api", "http://172.17.0.2:8090"],
        ["appbase.condenser_api.get_account_reputations", "http://172.17.0.4:8080"],
        ["appbase.follow_api.get_account_reputations", "http://172.17.0.4:8080"],
        ["appbase.condenser_api.broadcast_transaction", "http://172.17.0.2:8090"],
        ["appbase.network_broadcast_api", "http://172.17.0.2:8090"],
        ["appbase.block_api.get_block", "http://172.17.0.2:8090"],
        ["appbase.block_api.get_block_range", "http://172.17.0.2:8090"],
        ["appbase.condenser_api.get_block", "http://172.17.0.2:8090"],
        ["appbase.condenser_api.get_accounts", "http://172.17.0.2:8090"],
        ["appbase.condenser_api.get_active_votes", "http://172.17.0.4:8080"],
        ["appbase.condenser_api.get_blog", "http://172.17.0.4:8080"],
        ["appbase.condenser_api.get_blog_entries", "http://172.17.0.4:8080"],
        [
          "appbase.condenser_api.get_comment_discussions_by_payout",
          "http://172.17.0.4:8080"
        ],
        ["appbase.condenser_api.get_content", "http://172.17.0.4:8080"],
        ["appbase.condenser_api.get_content_replies", "http://172.17.0.4:8080"],
        [
          "appbase.condenser_api.get_discussions_by_author_before_date",
          "http://172.17.0.4:8080"
        ],
        ["appbase.condenser_api.get_discussions_by_blog", "http://172.17.0.4:8080"],
        [
          "appbase.condenser_api.get_discussions_by_comments",
          "http://172.17.0.4:8080"
        ],
        ["appbase.condenser_api.get_discussions_by_created", "http://172.17.0.4:8080"],
        ["appbase.condenser_api.get_discussions_by_feed", "http://172.17.0.4:8080"],
        ["appbase.condenser_api.get_discussions_by_hot", "http://172.17.0.4:8080"],
        [
          "appbase.condenser_api.get_discussions_by_promoted",
          "http://172.17.0.4:8080"
        ],
        [
          "appbase.condenser_api.get_discussions_by_trending",
          "http://172.17.0.4:8080"
        ],
        ["appbase.condenser_api.get_follow_count", "http://172.17.0.4:8080"],
        ["appbase.condenser_api.get_followers", "http://172.17.0.4:8080"],
        ["appbase.condenser_api.get_following", "http://172.17.0.4:8080"],
        [
          "appbase.condenser_api.get_post_discussions_by_payout",
          "http://172.17.0.4:8080"
        ],
        ["appbase.condenser_api.get_reblogged_by", "http://172.17.0.4:8080"],
        ["appbase.condenser_api.get_replies_by_last_update", "http://172.17.0.4:8080"],
        ["appbase.condenser_api.get_trending_tags", "http://172.17.0.4:8080"],
        ["appbase.database_api.list_comments", "http://172.17.0.4:8080"],
        ["appbase.database_api.list_votes", "http://172.17.0.4:8080"],
        ["appbase.database_api.find_votes", "http://172.17.0.4:8080"],
        ["appbase.database_api.find_comments", "http://172.17.0.4:8080"],
        ["appbase.tags_api.get_discussion", "http://172.17.0.4:8080"],
        ["appbase.account_history_api", "http://172.17.0.5:6543"],
        ["account_history_api", "http://172.17.0.5:6543"],
        ["appbase.account_history_api.get_ops_in_block", "http://172.17.0.5:6543"],
        ["appbase.account_history_api.enum_virtual_ops", "http://172.17.0.5:6543"],
        ["appbase.account_history_api.get_transaction", "http://172.17.0.5:6543"],
        ["appbase.account_history_api.get_account_history", "http://172.17.0.5:6543"],
        ["condenser_api.get_ops_in_block", "http://172.17.0.5:6543"],
        ["condenser_api.get_transaction", "http://172.17.0.5:6543"],
        ["condenser_api.get_account_history", "http://172.17.0.5:6543"],
        ["appbase.condenser_api.get_ops_in_block", "http://172.17.0.5:6543"],
        ["appbase.condenser_api.get_transaction", "http://172.17.0.5:6543"],
        ["appbase.condenser_api.get_account_history", "http://172.17.0.5:6543"],
        ["database_api.get_account_history", "http://172.17.0.5:6543"],
        ["appbase.database_api.get_account_history", "http://172.17.0.5:6543"],
        ["appbase", "http://172.17.0.2:8090"]
      ],
      "ttls": [
        ["appbase", 1],
        ["appbase.block_api", -2],
        ["appbase.block_api.get_block_range", -1],
        ["appbase.database_api", 1],
        ["appbase.condenser_api.get_account_reputations", 3600],
        ["appbase.condenser_api.get_ticker", 1],
        ["appbase.condenser_api.get_accounts", 3],
        ["appbase.condenser_api.get_account_history", 3],
        ["appbase.condenser_api.get_content", 3],
        ["appbase.condenser_api.get_profile", 3],
        ["appbase.database_api.find_accounts", 3],
        ["appbase.condenser_api.get_dynamic_global_properties", 1],
        ["appbase.condenser_api.get_ops_in_block.params=[2889020,false]", 0],
        [
          "appbase.account_history_api.get_ops_in_block.params={\"block_num\":2889020,\"only_virtual\":false}",
          0
        ]
      ],
      "timeouts": [
        ["appbase", 5],
        ["appbase.network_broadcast_api", 0],
        ["appbase.condenser_api.broadcast_block", 0],
        ["appbase.condenser_api.broadcast_transaction", 0],
        ["appbase.condenser_api.get_ops_in_block.params=[2889020,false]", 20],
        [
          "appbase.account_history_api.get_ops_in_block.params={\"block_num\":2889020,\"only_virtual\":false}",
          20
        ],
        ["appbase.condenser_api.get_account_history", 20]
      ]
    },
    {
      "name": "hive",
      "translate_to_appbase": false,
      "urls": [["hive", "http://172.17.0.4:8080"]],
      "ttls": [["hive", -1]],
      "timeouts": [["hive", 30]]
    },
    {
      "name": "bridge",
      "translate_to_appbase": false,
      "urls": [["bridge", "http://172.17.0.4:8080"]],
      "ttls": [
        ["bridge", -1],
        ["bridge.get_discussion", 3],
        ["bridge.get_account_posts", 3],
        ["bridge.get_ranked_posts", 3],
        ["bridge.get_profile", 3],
        ["bridge.get_community", 3],
        ["bridge.get_post", 3],
        ["bridge.get_trending_topics", 3]
      ],
      "timeouts": [["bridge", 30]]
    }
  ]
}
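Before building, it doesn't hurt to validate the JSON (optional; assumes jq is installed):

jq . config.json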

Building

docker build -t jussi .

Running jussi at 0.0.0.0:80

# Run where config.json is located
# Don't change the path
docker run --rm -itd --env JUSSI_UPSTREAM_CONFIG_FILE=/app/config.json -v $(pwd)/config.json:/app/config.json -p 80:8080 --name jussi-instance jussi

To listen only on the local machine, replace 80:8080 with 127.0.0.1:80:8080

Check the logs

docker logs jussi-instance -f --tail 50

That's it. You should be able to access all the APIs at port 80 of your public IP address.
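As a final end-to-end check, you can send a request through jussi from the host itself (a sketch using a standard hived API call; use 127.0.0.1 if you bound jussi to localhost only):

curl -s -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"condenser_api.get_dynamic_global_properties","params":[],"id":1}' http://localhost/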


The official GitLab repositories might include more information:
https://gitlab.syncad.com/hive/hivemind
https://gitlab.syncad.com/hive/HAfAH
https://gitlab.syncad.com/hive/jussi


Related post: How to run a HAF node

Feel free to ask anything.



