Hive Engine P2P Launch Details
As noted here, this post serves as a detailed update on the upcoming P2P layer launch.
Witness Instructions
To set up a witness for the coming P2P layer launch, follow the instructions in the Hive Engine wiki here.
In particular, note the server requirements and the witness-specific instructions, such as enabling the witness in the config and registering the witness using the provided script.
Alternatively, an excellent step-by-step guide by @rishi556 can be found here: https://peakd.com/@rishi556/how-to-set-up-a-hive-engine-witness-step-by-step-guide
Witness approval weights come from staked (and delegated) WORKERBEE, and an account can approve up to 30 witnesses.
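As a rough illustration of the approval rules above, here is a minimal Python sketch. It assumes (per the post) that an account's approval weight is its staked WORKERBEE plus WORKERBEE delegated to it, and that each account may approve at most 30 witnesses; all names here are hypothetical, not the actual contract code.

```python
# Minimal sketch of witness approval accounting (hypothetical, for illustration).
MAX_APPROVALS = 30  # each account can approve up to 30 witnesses

class Account:
    def __init__(self, staked, delegated_in=0):
        self.staked = staked            # staked WORKERBEE
        self.delegated_in = delegated_in  # WORKERBEE delegated to this account
        self.approvals = set()

    @property
    def weight(self):
        # Assumption: delegated-in stake counts toward the approver's weight.
        return self.staked + self.delegated_in

    def approve(self, witness):
        if len(self.approvals) >= MAX_APPROVALS:
            raise ValueError("approval limit reached")
        self.approvals.add(witness)

def witness_weight(witness, accounts):
    # A witness's total approval weight is the sum over approving accounts.
    return sum(a.weight for a in accounts if witness in a.approvals)
```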
More details about the witness contract are also provided in the wiki here.
Release Timing
- Feb 3, 01:00:00 UTC - Primary node update with up-to-date code, and deployment of the witness contracts. The witness process is scheduled for a later block. Witnesses can get set up using the DB snapshot to be provided.
- Feb 4, 01:00:00 UTC, specifically hive block 51022551 - Witness scheduling kicks off. If enough witnesses have registered and received approval by this point, the consensus process will start. Otherwise, it starts automatically once that condition is satisfied.
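Since Hive produces one block every 3 seconds, the wait until a target block like 51022551 can be estimated from the current head block. A small sketch (the 3-second interval is Hive's block time; the example head-block value is made up):

```python
from datetime import timedelta

HIVE_BLOCK_INTERVAL_SECONDS = 3  # Hive produces a block every 3 seconds

def eta_to_block(head_block: int, target_block: int) -> timedelta:
    """Estimate how long until target_block, given the current head block."""
    remaining = max(0, target_block - head_block)
    return timedelta(seconds=remaining * HIVE_BLOCK_INTERVAL_SECONDS)

# Example with a made-up head block 28800 blocks (one day) before the target:
print(eta_to_block(51022551 - 28800, 51022551))  # → 1 day, 0:00:00
```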
Technical Details About the Release
The recent release of the hive engine node software introduces a hash computation difference, which meant we needed to do a full replay from the genesis block to correct the hash behavior. This is important to do before deploying the P2P contract: once the process starts, the hashes are pushed to the sidechain, and without this change any witness node that replays from the genesis block would diverge from consensus. All witness nodes would then have to be derived from a particular snapshot to match, which I suppose is not the end of the world but is less than ideal. (And I suppose there is a way to fix it even later if all witnesses coordinated on a new fixed state.) In any case, the coming primary node update will make use of a DB snapshot with consistent hashes, obtained by replaying from genesis and replicating the same block operations as the current primary node.
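To see why a hash computation difference forces a full replay, consider a chained state hash: each block's hash folds in the previous one, so a single divergent computation changes every hash after it. A toy Python sketch of that propagation (not the node's actual hashing scheme; the two "computations" are stand-ins):

```python
import hashlib

def chain_hashes(blocks, hash_block):
    """Fold each block into a running chained hash; any change propagates forward."""
    h = "genesis"
    out = []
    for b in blocks:
        h = hashlib.sha256((h + hash_block(b)).encode()).hexdigest()
        out.append(h)
    return out

old_computation = lambda b: b.lower()  # stand-in for the old hash computation
new_computation = lambda b: b          # stand-in for the corrected computation

blocks = ["Tx1", "tx2", "tx3"]
a = chain_hashes(blocks, old_computation)
b = chain_hashes(blocks, new_computation)
# The chains diverge at the first block whose computation differs ("Tx1")
# and stay different from then on, even where the per-block inputs agree.
```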
Thankfully, the previous update allowed me to sync against the primary node data and preserve the DB state right before a difference in block data would have been introduced, at which point I could change the core node software to account for these anomalies. This is how I found the exact balances affected by the previous hack, and it is also how I discovered that the primary node skipped processing two blocks for various reasons (one was likely due to a core node update that changed how transactions were parsed, and the other I do not have an explanation for).
The other change I made was to increase sync speed by prefetching blocks in parallel, since the main bottleneck in the existing code was fetching block information one block at a time. This was motivated by how hivemind does block syncing (it uses the same parallel block fetch). Replay times from the sidechain genesis block were reduced from on the order of weeks to 2-3 days.
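The parallel prefetch idea can be sketched with a thread pool: fetch a window of upcoming blocks concurrently, then process them in order. This is only an illustration of the technique (the fetch function and window size here are made up, not the node's actual code):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_block(num):
    # Stand-in for an RPC call that returns one block's data.
    return {"block_num": num}

def replay(start, end, prefetch=20):
    """Process blocks [start, end] in order while fetching them in parallel."""
    processed = []
    with ThreadPoolExecutor(max_workers=prefetch) as pool:
        nums = range(start, end + 1)
        # Executor.map fetches concurrently but yields results in input order,
        # so blocks are still processed sequentially.
        for block in pool.map(fetch_block, nums):
            processed.append(block["block_num"])
    return processed
```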
Will the new snapshot have those two blocks processed?
No. The core node software was tweaked to exclude them as well, so the backup matches the primary, as will future replays using this code.
Ok, that's good. Which hive ref blocks were they?
43447729
44870101
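For illustration, excluding those two blocks during replay could look like a simple skip list; the names here are hypothetical, and the real change lives in the core node software.

```python
# Hive ref blocks the primary node skipped; replays exclude them too so that
# replayed state matches the primary's state.
SKIPPED_BLOCKS = {43447729, 44870101}

def blocks_to_process(start, end):
    """Yield block numbers in [start, end], excluding the skipped ones."""
    for num in range(start, end + 1):
        if num in SKIPPED_BLOCKS:
            continue
        yield num
```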
Need to pay attention to this...