A long time ago, in a block far, far away…
No, wait, this is NOT the story you are looking for ;-)
This part was going to be filled with retrospectives, the origins of the Steem Pressure series, and stories about the events that caused us to fork from Steem, but that would be just a waste of time, and time is a resource that we can’t afford to waste.
TL;DR: Steem is no longer what we knew it to be.
> “No Ned to worry.”
> - Anonymous
Our future is in our hands.
We are Hive now.
## Paint it Hive
Time to update promo materials.
## New Net, New Nodes
Since many of us upgraded our toys to Hive at the time of HF23, most of the tools we used are now Hive compatible.
### Seed nodes

```
seed.openhive.network:2001       # gtg
seed.roelandp.nl:2001            # roelandp
hiveseed-se.privex.io:2001       # privex (SE)
steemseed-fin.privex.io:2001     # privex (FI)
seed.liondani.com:2016           # liondani
hived.splinterlands.com:2001     # aggroed
seed.hivekings.com:2001          # drakos
node.mahdiyari.info:2001         # mahdiyari
anyx.io:2001                     # anyx
seed.buildteam.io:2001           # thecryptodrive
hive-seed.lukestokes.info:2001   # lukestokes.mhth
hive-seed.arcange.eu:2001        # arcange
seed.chitty.me:2001              # chitty
```
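If you want to check that a seed is reachable before putting it in your `config.ini`, a quick TCP probe is enough. A minimal sketch using netcat (extend the list with any of the seeds above):

```shell
# Probe a few seeds' p2p ports with a 5-second timeout.
for seed in seed.openhive.network:2001 seed.roelandp.nl:2001; do
  host=${seed%:*}   # strip the :port suffix
  port=${seed#*:}   # strip the host: prefix
  if nc -z -w 5 "$host" "$port" 2>/dev/null; then
    echo "OK   $seed"
  else
    echo "FAIL $seed"
  fi
done
```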
### API nodes

```
https://api.openhive.network
https://api.hive.blog
https://anyx.io
https://api.hivekings.com
https://api.pharesim.me
https://hived.hive-engine.com
https://rpc.esteem.app
https://hived.privex.io
https://techcoderx.com
```
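Any of these nodes should answer a standard JSON-RPC call. For example, `condenser_api.get_dynamic_global_properties` returns the head block number, current supply, and other chain-wide state (I use the first node from the list here; any other should do):

```shell
# A JSON-RPC request to a public Hive API node.
payload='{"jsonrpc":"2.0","method":"condenser_api.get_dynamic_global_properties","params":[],"id":1}'
curl -s --max-time 10 -d "$payload" https://api.openhive.network || echo "node unreachable"
```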
## DIY
If you want to run your own node, here are some quick tips that can be useful:
### Seed Node

Configure your build with:

```
cmake \
-DCMAKE_BUILD_TYPE=Release \
-DLOW_MEMORY_NODE=ON \
-DCLEAR_VOTES=ON \
-DSKIP_BY_TX_ID=OFF \
-DBUILD_STEEM_TESTNET=OFF \
-DENABLE_MIRA=OFF \
-DSTEEM_STATIC_BUILD=ON \
../hive
```
Depending on your needs and resources, you might want to use either `ENABLE_MIRA=OFF` or `ENABLE_MIRA=ON`.
A `config.ini` for a seed node can be as simple as this:

```
plugin = witness
p2p-endpoint = 0.0.0.0:2001
```
This is intended to be used as a seed node, but you can easily extend it to be more useful by enabling a webserver endpoint and useful APIs such as `block_api` or `network_broadcast_api`. However, if you choose to add a plugin such as `account_by_key` or `market_history`, you will have to replay.
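For example, a minimal sketch of such an extended seed configuration, assuming you add only the replay-free APIs mentioned above (the endpoint addresses are illustrative):

```
plugin = witness
plugin = webserver p2p json_rpc
plugin = block_api network_broadcast_api

p2p-endpoint = 0.0.0.0:2001
webserver-http-endpoint = 127.0.0.1:8090
```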
### API Node

If you’ve read my Steem Pressure series, you know that I no longer use a monolithic node. Instead, I use `jussi` to route specific methods to specialized endpoints:
- Account History Node (non-MIRA)
- Fat Node (MIRA)
- Hivemind
Please note that in my setup the Fat Node itself is not enough to feed the Hivemind instance because of the lack of a market_history plugin. Not an issue in my environment, because I’m running both nodes, and I prefer to run plugins on the low memory node where possible.
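To illustrate the routing idea, here is a rough sketch of a jussi `config.json` with one upstream per backend. The hostnames are placeholders for the nodes described above, and the exact schema may differ between jussi versions, so treat this as an illustration of the concept rather than a working configuration:

```json
{
  "limits": {"accounts_blacklist": []},
  "upstreams": [
    {
      "name": "appbase",
      "urls": [
        ["appbase", "http://ah-node:8091"],
        ["appbase.condenser_api.get_state", "http://fat-node:8091"]
      ],
      "ttls": [["appbase", 3]],
      "timeouts": [["appbase", 5]]
    },
    {
      "name": "hive",
      "urls": [["hive", "http://hivemind:8080"]],
      "ttls": [["hive", 3]],
      "timeouts": [["hive", 5]]
    }
  ]
}
```

The key point is that jussi matches requests by method prefix (the first element of each pair), so a more specific prefix can send individual methods to a different backend.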
### Account History Node - reference configuration

Configure your build with:

```
cmake \
-DCMAKE_BUILD_TYPE=Release \
-DLOW_MEMORY_NODE=ON \
-DCLEAR_VOTES=ON \
-DSKIP_BY_TX_ID=OFF \
-DBUILD_STEEM_TESTNET=OFF \
-DENABLE_MIRA=OFF \
-DSTEEM_STATIC_BUILD=ON \
../hive
```
We can’t use MIRA here, because we are going to use the pre-MIRA implementation of the account history plugin: `account_history_rocksdb`.

Here’s the reference `config.ini` file:
```
log-appender = {"appender":"stderr","stream":"std_error"}
log-logger = {"name":"default","level":"info","appender":"stderr"}
backtrace = yes

plugin = webserver p2p json_rpc
plugin = database_api condenser_api
plugin = witness
plugin = rc
plugin = market_history
plugin = market_history_api
plugin = account_history_rocksdb
plugin = account_history_api
plugin = transaction_status
plugin = transaction_status_api
plugin = account_by_key
plugin = account_by_key_api
plugin = block_api network_broadcast_api rc_api

p2p-endpoint = 0.0.0.0:2001
p2p-seed-node = gtg.openhive.network:2001

transaction-status-block-depth = 64000
transaction-status-track-after-block = 42000000

webserver-http-endpoint = 127.0.0.1:8091
webserver-ws-endpoint = 127.0.0.1:8090
webserver-thread-pool-size = 256
```
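Once such a node is synced, you can check the account history plugin through the local webserver endpoint. A sketch, where the port matches the `webserver-http-endpoint` above and the account name is just an example:

```shell
# Fetch the 10 most recent operations for an account from the local node.
payload='{"jsonrpc":"2.0","method":"account_history_api.get_account_history","params":{"account":"gtg","start":-1,"limit":10},"id":1}'
curl -s --max-time 10 -d "$payload" http://127.0.0.1:8091 || echo "node unreachable"
```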
### Fat Node - reference configuration

Configure your build with:

```
cmake \
-DCMAKE_BUILD_TYPE=Release \
-DLOW_MEMORY_NODE=OFF \
-DCLEAR_VOTES=OFF \
-DSKIP_BY_TX_ID=ON \
-DBUILD_STEEM_TESTNET=OFF \
-DENABLE_MIRA=ON \
-DSTEEM_STATIC_BUILD=ON \
../hive
```
For the fat node, I use the MIRA build.

Here’s the reference `config.ini` file:
```
log-appender = {"appender":"stderr","stream":"std_error"}
log-logger = {"name":"default","level":"info","appender":"stderr"}
backtrace = yes

plugin = webserver p2p json_rpc
plugin = database_api condenser_api
plugin = reputation
plugin = reputation_api
plugin = block_api

p2p-endpoint = 0.0.0.0:32001
p2p-seed-node = gtg.openhive.network:2001

webserver-http-endpoint = 127.0.0.1:8091
webserver-ws-endpoint = 127.0.0.1:8090
webserver-thread-pool-size = 256
```
### Storage needs

As always, make sure that you have very fast storage.

| Node type | Storage |
|---|---|
| AH Node | 600 GB |
| Fat Node | 400 GB |
| Hivemind | 300 GB |
In the next episode, I will write more about such a setup, the hardware it requires, and how long it takes nowadays to build it from scratch.
![Hive_Queen](https://images.hive.blog/DQmSJUo4g9AmoVFoAbs6gzMw6coVURUQKg7URtBSfkEj5oJ/Hive_free-file.png)
Really cool video! I think it would look great as a .gif too :)
I'm afraid that it's too long and too complex for a gif in a reasonable size.
Original is rendered in Full HD (1080p).
I plan to release a few more promo videos that will be more suitable for gif animations :-)
I can’t remember the exact name and I am being too lazy at the moment to look it up, but at one point Steemit was looking to incorporate state files (I think this was the name of it?) for faster replay times. Is there anything like that in the works for Hive? I am not sure how that would work in a decentralized manner. It was my assumption that Steemit was planning to keep these in a centralized way.
Yes, they even had an idea of "Platform Independent State Files" (with save/load feature) back in 2018, but they also had Ned as CEO.
Even now, you can use a periodically saved state as a backup to avoid replay. That, however, has some disadvantages: you need to shut the node down (or use separate nodes for the sole purpose of creating such backups, so-called "state providers"), and while the state can be transferred between systems, they have to be compatible to some extent: same build environment, compatible CPU instruction sets, etc.
That's very useful, for example, for spawning multiple seed nodes in a short amount of time: replay once, transfer to many, start them up.
Exactly. I am just thinking ahead to hardfork issues that would take down the chain for many days like in the past.
Seems to me like there is potential here for an HPS proposal to pay for a few of these state-file-specific nodes in the case of chain restarts. They could be distributed to various top witnesses or something like that. Just thinking out loud.
Thanks for the update. You missed one API node:
https://techcoderx.com
Thank you, added :-)
Very cool!! Thank you for the updates. I am very interested in running a node and learning how to build apps for Hive. I almost had Busy integrated with my website, but it didn't quite do what I wanted it to do. I am hopeful that I can get my project Hive powered this year!!
Excellent!
Let's hope the Empire does not strike back...
Very cool!! Looks nice.