Running a Hive witness is typically done on rented hardware due to the demanding space and reliability requirements. Up until now, this meant storing the entire block log on disk, which currently takes 507GB, plus an additional 100GB for artifacts and ~24GB for the state. That adds up to almost 650GB of very fast storage, ideally NVMe, which puts it out of reach of most VPS and dedicated server offerings.
Starting with 1.27.7, you are able to use what I call a "rolling block log", or more specifically the new block-log-split config option. This allows you to keep only X blocks on the file system, reducing the space requirements of a node by hundreds of GB. While this is of little use for a full node that needs access to all chain data, for a witness it is a huge improvement.
I have been testing this feature in multiple configurations, and there are two main options to consider.
- Shared Memory on disk
- Shared Memory on tmpfs
The first option requires very little RAM (~4GB) but is slower to get up and running. The second option requires the entire state file to be in RAM but gives you the fastest possible performance. Currently, this requires ~24GB of RAM just for the state file and a little more for the OS and processes.
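For reference, the placement of the state is controlled by the shared-file-dir option in config.ini; the paths below are just an illustration based on the example config further down (by default the state file lives in the blockchain folder under your data directory).
# shared memory on disk: leave shared-file-dir commented out to use the default on-disk location
#shared-file-dir = "/home/hived/hived/blockchain"
# shared memory in RAM: point it at a tmpfs mount such as /run
shared-file-dir = "/run/hive"
shared-file-size = 24G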
You can also use a hybrid option where you get a node to the head block using tmpfs and then take the node down and move the state to the file system, although this only really helps if you want to reboot the node easily, as you can't temporarily access additional RAM.
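A rough sketch of that hybrid move, assuming the /run/hive tmpfs path and /home/hived/hived data directory used later in this post (stop hived cleanly first):
# move the state file off tmpfs onto disk
mv /run/hive/shared_memory.bin* /home/hived/hived/blockchain/
# then comment out shared-file-dir in config.ini (or point it at the new location) and restart hived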
There is very little you need to do to make this work; for the most part, all you need to do is change two lines in your config.ini and sync your node. Using a snapshot, I have been able to get a witness node online from nothing in around 7 minutes. This requires having a compatible snapshot available.
block-log-split option
This is the parameter that enables the new rolling block log feature, officially known as block log split. This feature splits the block_log into pieces and lets you choose how many pieces you want to keep. Each "piece" is 1M blocks, and using block-log-split = 0 will save no pieces of the block_log to disk. This means the only space requirements are the binaries, configuration file, and state file, plus a few logs. Using block-log-split = 1 will give you limited access to recent blocks via get_blocks() if you want to interact with the node.
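Put another way, the value is just how many 1M-block pieces you want to keep, with -1 meaning the old single-file format (as noted in the comments below):
block-log-split = -1   # legacy behavior: one monolithic block_log file
block-log-split = 0    # keep no pieces at all; witness-only, smallest footprint
block-log-split = 1    # keep only the newest 1M blocks, enough for get_blocks() on recent blocks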
You will need to either use a compatible snapshot, or resync your node to use this new feature. Before you do, it is recommended you use at least version 1.27.8 as a few bugs have been fixed.
The easiest way to get 1.27.8 is to grab the binary from @gtg.
wget https://gtg.openhive.network/get/bin/hived-1.27.8
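Then make it executable and drop it where the launch command used later in this post expects it:
mkdir -p bin
mv hived-1.27.8 bin/
chmod +x bin/hived-1.27.8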
You can also build the binary yourself using this guide.
There currently are not any compatible snapshots, so I created my own. You can find it here.
This snapshot is only compatible with the following settings:
plugin = witness condenser_api network_broadcast_api account_by_key database_api state_snapshot
You will need to run the same plugins or you will have problems restarting your node. This is an open issue and should be resolved in the future.
block-log-split = 0
This snapshot only works with block-log-split = 0. If you need to access your node to get blocks and monitor blockchain events from it, you will need at least block-log-split = 1, which will require you to do a full sync.
For most people, all you need to do is make sure you are running the same plugins as I am and add the block-log-split parameter. You can then load the snapshot above and launch your node with the following.
./bin/hived-1.27.8 -d /home/hived/hived --load-snapshot=witness
You will need to create a snapshot directory under your hived folder and unpack the witness.tgz (4.2GB) file there for it to load the snapshot. You can remove the witness.tgz once you have unpacked it.
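Something along these lines, assuming the /home/hived/hived data directory from the launch command above and that the archive unpacks into a witness folder (adjust the paths to wherever you downloaded the file):
mkdir -p /home/hived/hived/snapshot
cd /home/hived/hived/snapshot
tar -xzf ~/witness.tgz
rm ~/witness.tgz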
Before doing this, you will need to decide if you want to run the shared memory on disk or tmpfs (ram). Here is what the requirements look like in the two configurations.
Disk
- ~4GB-8GB ram
- ~30GB disk space
Tmpfs
- ~28GB ram
- ~1GB disk space
Keep in mind, you need some breathing room in both configurations to download the snapshot and hold any additional copies of anything you are working with.
Example config.ini
log-appender = {"appender":"stderr","stream":"std_error"}
log-appender = {"appender":"p2p","file":"logs/p2p/p2p.log"}
log-logger = {"name":"default","level":"info","appender":"stderr"}
log-logger = {"name":"p2p","level":"warn","appender":"p2p"}
backtrace = yes
plugin = witness condenser_api network_broadcast_api account_by_key database_api state_snapshot
#shared-file-dir = "/run/hive"
shared-file-size = 24G
shared-file-full-threshold = 9500
shared-file-scale-rate = 1000
enable-stale-production = false
required-participation = 33
p2p-seed-node = hive-seed.arcange.eu:2001 # @arcange (BE, Wavre) (ASN: Proximus NV)
p2p-seed-node = anyx.io:2001 # @anyx (CA) (ASN: OVH SAS)
p2p-seed-node = seed.thecryptodrive.com:2001 # @thecryptodrive (DE, Esslingen) (ASN: Hetzner Online GmbH)
p2p-seed-node = seed.buildteam.io:2001 # @buildteam (DE) (ASN: Hetzner Online GmbH)
p2p-seed-node = seed.hivekings.com:2001 # @drakos (DE, Nuremberg) (ASN: Contabo GmbH)
p2p-seed-node = gtg.steem.house:2001 # @gtg (PL) (ASN: OVH SAS)
p2p-seed-node = seed.openhive.network:2001 # @gtg (PL) (ASN: OVH SAS)
p2p-seed-node = hive-seed.roelandp.nl:2001 # @roelandp (FI, Helsinki) (ASN: Hetzner Online GmbH)
p2p-seed-node = node.mahdiyari.info:2001 # @mahdiyari (FI, Helsinki) (ASN: Hetzner Online GmbH)
p2p-seed-node = hived.hive-engine.com:2001 # @aggroed (FI) (ASN: Hetzner Online GmbH)
p2p-seed-node = hived.splinterlands.com:2001 # @aggroed (FI) (ASN: Hetzner Online GmbH)
p2p-seed-node = rpc.ausbit.dev:2001 # @ausbitbank (FI, Tuusula) (ASN: Hetzner Online GmbH)
p2p-seed-node = seed.chitty.me:2001 # @chitty (FI) (ASN: Hetzner Online GmbH)
p2p-seed-node = hiveseed-fin.privex.io:2001 # @privex (FI, Helsinki) (ASN: Hetzner Online GmbH)
p2p-seed-node = seed.liondani.com:2016 # @liondani (GB, Maidstone) (ASN: HostDime.com, Inc.)
p2p-seed-node = hiveseed-se.privex.io:2001 # @privex (SE, Stockholm) (ASN: Privex Inc.)
webserver-thread-pool-size = 32
p2p-endpoint = 0.0.0.0:35000
block-log-split = 0
You will need to make sure /run is large enough (25GB-30GB is good) to store the shared memory file if you choose to use tmpfs.
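On most Linux distributions /run is a tmpfs sized at about half of RAM, so on a 32GB machine you may need to grow it. A temporary remount (it does not survive a reboot) looks like this:
sudo mount -o remount,size=30G /run
mkdir -p /run/hive   # the directory the example config's shared-file-dir points to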
Using my hardware, I was able to copy the files over, unpack the snapshot, and get a node to head block in around 7 minutes using tmpfs and under 20 minutes using disk (dual NVMe RAID 0).
You can also opt to resync your node and not use a snapshot, which will typically take 24-48 hours. At that point, you can create your own snapshot using the --dump-snapshot= parameter when starting your node.
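As a sketch, dumping a snapshot named witness (the name is just an example) looks much like the load command earlier; the node writes it under the snapshot directory in your data folder, and you can tar it up from there:
./bin/hived-1.27.8 -d /home/hived/hived --dump-snapshot=witness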
1.27.9 should be considered stable soon, and by the time you decide to give this a try, you may want to use 1.27.9 instead.

This also allows you to run Hive witnesses on mini PCs that pull like 4 watts, which is pretty damn amazing.
Hive likely has one of the most efficient blockchains when it comes to block production, especially coupled with running on these ultra-low-power computers.
Obviously we can't have everyone running 'light nodes', but you can technically run a full node on these machines too if you have the space.
API nodes are the single biggest power user in the whole suite but even that is getting crazy efficient and performant with every update.
Hive, technology-wise, has never been in a better place. I hope we can someday move from performance and maintainability to more innovation and being a front runner in new, interesting tech.
I actually have an N100 with 16GB RAM and a 512GB NVMe running a node.
I dread the day nodes can fit on smartphones. However, I yearn for that day when every user can be a node, OOTB.
Thanks!
Might save this for future reference.
Well this is a huge win for everyone running a witness. Fantastic news!
32GB of RAM seems like it would be cutting it kind of close to run the tmpfs option. Probably better to bump your hardware up to 64GB, I am guessing? Unless you are doing hosted, I guess; then you can make it pretty much anything as long as you have it available.
32GB is fine.
Note that
block-log-split = -1
will keep the old monolithic format, i.e. one big giant 500GB block_log file.

By the way, even without a snapshot, a massive sync now takes less than 2 days, which is a big improvement since the Hive/Steem split.
Since the fork there have been a lot of improvements focused on optimizations.
Full set of .artifacts is 2GB, not 100 :o)
I have not tested it, because I don't have an HDD anymore, but the block log should not require fast storage - once it is in live mode, it only needs to write a couple of kilobytes every 3 seconds. Even during replay, an HDD should be just fine.
One of the features of split block log is the ability to share all but last part. You can have multiple nodes that use the same block log parts and related artifacts symlinked to their .blockchain folders from the same actual storage location. Only the last part that is being written by each node separately needs to be stored locally for each node (but once it is filled and next part file is created, you can copy new filled part to shared location, stop the node, symlink and restart). That way all your nodes can fully participate in p2p syncing and provide block_api functionality while you only have one copy of block log. That feature is useful if you have multiple nodes on servers in the same location (for example for different services, load balancing or witness with bastion node).
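A minimal sketch of that layout, with a shared location at /srv/hive/shared-block-log (the path and part file names here are purely illustrative; use whatever names your node actually writes in its blockchain folder):
cd /home/hived/hived/blockchain
# replace local copies of filled parts with symlinks to the shared storage
ln -s /srv/hive/shared-block-log/block_log_part.0001 .
ln -s /srv/hive/shared-block-log/block_log_part.0001.artifacts .
# only the newest part, still being written by this node, stays as a real local file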
Depends on how long it has been running for. Mine is 97G and slowly growing. I believe mine is experiencing a bug, because most of my nodes are ~2GB, but some are growing.
I believe this is what someguy does with his hosted witness node service.
I wonder how it even works (what is the content of your .artifacts) - a valid file contains a header (artifact_file_header) and then one 24-byte record for each block (artifact_file_chunk). If your file is bigger, it definitely points to a bug. Your log might already contain some info pointing to the source of the problem (look for messages originating from block_log_artifacts.cpp or any warnings/errors related to the block log).

I was going to blow the node away since on my other nodes it is 2.2GB or so, but this one has just been growing. If I can get some information on where this bug is coming from, it would be helpful to resolve it.
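Assuming you capture hived's stderr to a file (the example config above sends the default logger to stderr; hived.log is just a placeholder name), something like this should surface the relevant messages:
grep -n "block_log_artifacts" hived.log
grep -niE "artifact|block_log" hived.log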
This is great news! I might give it a try on my laptop, which has 504GB free on a 1TB SSD. With a fiber connection that changes IP only a couple of times a year, would this actually be a viable setup for a witness?
I wouldn't use a laptop for a witness, but it should be able to run it.