Hivemind / HAF: how to save and restore the database for development purposes


Hello,

(Image: Midjourney generation, "soviet era poster of workers putting bees in a server")

As I've been working on hivemind again, my workflow has involved a lot of "test X, revert the database to its previous state, change the code, then test again".

So I figured I'd share my workflow.

Why not resync the normal way?

It would be best to properly remove the database and do a fresh sync every time, but we can't: a full sync from block 0 to 5 000 000 takes about half an hour, and we can't afford that every time we want to test something 😅

What we want

We want a dump of the database with all the mocks applied except the ones I intend to work on, synced up to the block where those mocks would otherwise kick in. Since I work on communities, that's up to block 4 997 990.

Then we want our workflow to be:

  • Delete the database
  • Restore our database dump at block 4 997 990
  • Add our modified mocks
  • Execute the hivemind indexer over the remaining blocks and test our updated logic (the whole loop is sketched below)
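
Put together, the loop looks roughly like this (just a sketch: restore_database.sh and add_mocks.sh are hypothetical names for the scripts detailed in the sections below):

#!/bin/bash
# Hypothetical wrapper around the steps detailed in the rest of this post.
./restore_database.sh   # drop, recreate and restore the dump at block 4 997 990 (see "Restoring the database")
./add_mocks.sh          # inject the modified mocks (see "Adding the remaining mocks")
# index the remaining blocks, then run your tests against the result
./cli.py --database-url postgresql://postgres:@localhost:5432/haf_block_log --test-max-block=5000011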

Installation

Follow the setup from:
https://peakd.com/hivemind/@howo/haf-powered-hivemind-dev-environment-setup-guide-according-to-my-findings-it-may-or-may-not-be-the-best-way

Sync HAF to 5m blocks. Then, when you populate the mocks, here's a handy script to remove the directory you want to work on from the mocking dirs, run the mocker, and move it back.

It's super crude, but it gets the job done:

#!/bin/bash

# Move the mocked ops somewhere else
mv /home/howo/projects/hivemind/mock_data/block_data/community_op /home/howo/projects/hivemind

# Set the environment variables
export MOCK_BLOCK_DATA_PATH=/home/howo/projects/hivemind/mock_data/block_data
export MOCK_VOPS_DATA_PATH=/home/howo/projects/hivemind/mock_data/vops_data
export PYTHONPATH=$PYTHONPATH:/home/howo/projects/hivemind

# Execute the mocker script
python3 /home/howo/projects/hivemind/hive/indexer/mocking/populate_haf_with_mocked_data.py --database-url postgresql://postgres:root@localhost:5432/haf_block_log

# Move the directory back to its original location
mv /home/howo/projects/hivemind/community_op /home/howo/projects/hivemind/mock_data/block_data

Now you can sync hivemind. This time we'll do things differently so that it stops at a specific block:

./cli.py --database-url postgresql://postgres:@localhost:5432/haf_block_log --test-max-block=4997990

When that's done (it'll take a little while), we can finally dump our database:

pg_dump -j 10 -Fd -f dump.dump -h localhost -p 5432 -U postgres -d haf_block_log
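
If you want to sanity-check the dump before relying on it, pg_restore can list the archive's table of contents without touching any database:

# list the first entries of the dump's table of contents (works on -Fd directory dumps)
pg_restore -l dump.dump | head -n 20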

And at this point we have our base setup, so you can always come back to this "save state".
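
And since -Fd writes the dump as a plain directory, nothing stops you from keeping several save states around by copying it (optional, just a convenience):

# snapshot the dump directory under a timestamped name
cp -r dump.dump "dump_$(date +%Y%m%d_%H%M%S).dump"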

Adding the remaining mocks and syncing to 5m

This is similar to the other script, except we only add the specific mock file we were missing. Make sure to run it in a different shell, or reset the environment variables first:

# notice how it's now only the community_op directory
export MOCK_BLOCK_DATA_PATH=/home/howo/projects/hivemind/mock_data/block_data/community_op
export PYTHONPATH=$PYTHONPATH:/home/howo/projects/hivemind
# Execute the mocker script
python3 /home/howo/projects/hivemind/hive/indexer/mocking/populate_haf_with_mocked_data.py --database-url postgresql://postgres:root@localhost:5432/haf_block_log

and now we can sync hivemind all the way to 5m blocks:

./cli.py --database-url postgresql://postgres:@localhost:5432/haf_block_log --test-max-block=5000011 --community-start-block 4997991

Notice that I added --community-start-block; this may not be needed depending on your workflow.

Restoring the database

So you've finally run your test and want to revert to the previous state. I tried a few variants of --clean to get the restore to work in place, but I never got results as good as simply destroying and recreating the entire database. Here's the script I use:

Before you execute it, make sure you copy /etc/postgresql/14/main/pg_hba.conf and /etc/postgresql/14/main/postgresql.conf somewhere (I put them at /) so that you don't have to edit things manually.
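
That one-time backup is just two copies, matching the paths the script below expects:

# one-time backup of the known-good config files to / (copied back by the script below)
sudo cp /etc/postgresql/14/main/pg_hba.conf /pg_hba.conf
sudo cp /etc/postgresql/14/main/postgresql.conf /postgresql.conf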

# destroy and recreate the db
sudo pg_dropcluster 14 main --stop
sudo pg_createcluster 14 main
sudo systemctl restart postgresql.service
# remove password, see the other guide as to why
sudo -i -u postgres psql -c 'alter role postgres password null;'

# remove config files and copy paste the good config files
sudo rm /etc/postgresql/14/main/pg_hba.conf
sudo rm /etc/postgresql/14/main/postgresql.conf
sudo cp /pg_hba.conf /etc/postgresql/14/main/pg_hba.conf
sudo cp /postgresql.conf /etc/postgresql/14/main/postgresql.conf
sudo chmod 777 /etc/postgresql/14/main/postgresql.conf
sudo chmod 777 /etc/postgresql/14/main/pg_hba.conf
sudo systemctl restart postgresql.service

# setup HAF
cd /home/howo/projects/hivemind/haf/scripts
sudo chown -R postgres:postgres /home/howo/projects/hivemind/haf/scripts/haf_database_store
sudo chmod -R 700 /home/howo/projects/hivemind/haf/scripts/haf_database_store
sudo ./setup_postgres.sh
sudo ./setup_db.sh --haf-db-admin=postgres

# restore the database
pg_restore --section=pre-data --disable-triggers -h localhost -p 5432 -U postgres -d haf_block_log /home/howo/projects/hivemind/dump.dump
pg_restore -j 20 --section=data --disable-triggers -h localhost -p 5432 -U postgres -d haf_block_log /home/howo/projects/hivemind/dump.dump
pg_restore --section=post-data --disable-triggers --clean --if-exists -h localhost -p 5432 -U postgres -d haf_block_log /home/howo/projects/hivemind/dump.dump
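
Optionally, it can be worth refreshing the planner statistics after a bulk restore before timing anything (standard PostgreSQL housekeeping, not something pg_restore does for you):

# recompute planner statistics for the restored database (optional)
vacuumdb -h localhost -p 5432 -U postgres -d haf_block_log --analyze-only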

And now you're ready to go again: add the extra mocks and sync hivemind back to 5m :)
