Below is a list of some of the Hive-related programming issues worked on by the BlockTrades team during the past work period:
Hived work (blockchain node software)
Updating RC cost calculation code
We continued to analyze the resource credits (RC) code. We've written some code to dump RC costs during replay of the blockchain and created some graphs to analyze how the RC resource pools change over the blockchain's operational history. Our most significant finding so far is that the execution-time costs are currently very inaccurate, largely because signature verification time wasn't accounted for at all.
To get better estimates of real-world execution-time costs, we're probably going to create a tool that measures execution times for various operations during replay, as a starting point for updating them to accurate values, with a separate cost calculation that accounts for signature verification costs based on the number of signatures used by the transaction containing the operations.
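As a rough illustration of this approach (a hedged sketch only, not the actual hived implementation, which is in C++), the execution-time cost of a transaction could be modeled as the sum of its operations' measured replay costs plus a per-signature verification cost. The operation names and constants below are hypothetical placeholders:

```python
# Hedged sketch of the cost model described above; the constants and names
# are hypothetical placeholders, not values from the actual hived code.

# Hypothetical per-operation execution-time costs (e.g. in nanoseconds),
# as they might be measured during a replay of the blockchain.
MEASURED_OP_EXECUTION_COSTS = {
    "vote_operation": 20_000,
    "transfer_operation": 35_000,
    "custom_json_operation": 15_000,
}

# Hypothetical cost of verifying one signature. Signature verification is
# charged per transaction, based on how many signatures the transaction
# carries, rather than being folded into the per-operation costs.
SIGNATURE_VERIFICATION_COST = 250_000


def execution_time_cost(operations, num_signatures):
    """Estimate a transaction's execution-time RC cost as the sum of its
    operations' measured costs plus a per-signature verification cost."""
    op_cost = sum(MEASURED_OP_EXECUTION_COSTS.get(op, 0) for op in operations)
    return op_cost + num_signatures * SIGNATURE_VERIFICATION_COST


# Example: a transfer signed with two signatures (e.g. a multisig account).
print(execution_time_cost(["transfer_operation"], num_signatures=2))
```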
Testing via new “mirror net” technology identified a new bug (now fixed)
As mentioned a while back, we've been developing a tool that takes an existing block_log (e.g. a block_log from the mainnet) as a starting point for launching a testnet that more closely matches the configuration of the mainnet. This technology is conceptually similar to the idea behind the older tinman/gatling code, but is designed for higher performance.
The new mirror net code is already proving its worth: while testing it, we found a bug in the hived code whereby the reward balance could go negative. For more on this bug, see the fix and associated issue: https://gitlab.syncad.com/hive/hive/-/merge_requests/306
In the longer term, we’ll be integrating this technology into our build-and-test system (continuous integration system) for various advanced test scenarios.
Finished testing and merged command-line interface (CLI) wallet improvements into the develop branch
Completed improvements for offline use of the CLI wallet:
https://gitlab.syncad.com/hive/hive/-/merge_requests/265
Added a default value to the server-rpc-endpoint option:
https://gitlab.syncad.com/hive/hive/-/merge_requests/273
Other hived-related work:
- Finished testing and merged in fixes for the sql_serializer and account history plugins: https://gitlab.syncad.com/hive/hive/-/merge_requests/289 and https://gitlab.syncad.com/hive/hive/-/merge_requests/294
- Merged in changes for HBD limits for HF26: https://gitlab.syncad.com/hive/hive/-/merge_requests/297
- Removed the obsolete SKIP_BY_TX_ID compile option: https://gitlab.syncad.com/hive/hive/-/merge_requests/301
- Fixed a problem with the faketime library on some platforms: https://gitlab.syncad.com/hive/hive/-/merge_requests/303
- Updated some API pattern tests based on bug fixes: https://gitlab.syncad.com/hive/hive/-/merge_requests/299
- Improved testtools robustness when there is a temporary communication interruption to nodes being tested (see the retry sketch after this list): https://gitlab.syncad.com/hive/hive/-/merge_requests/302
- Updated to use a newer clang-tidy linter (now uses the default one on Ubuntu 20): https://gitlab.syncad.com/hive/hive/-/merge_requests/300
- Compile all targets with boost > 1.70 available on Ubuntu 20.04: https://gitlab.syncad.com/hive/hive/-/merge_requests/307
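As a side note on the testtools robustness item above: the general idea is to retry node API calls a few times with a short pause instead of failing a test on the first temporary communication interruption. The sketch below is a generic illustration of that pattern, not the actual testtools code; the function and parameter names are assumptions.

```python
# Generic retry-on-interruption pattern; not the actual testtools implementation.
import time


def call_with_retries(send_request, attempts=3, delay=1.0):
    """Call send_request(), retrying on connection problems up to `attempts` times."""
    last_error = None
    for _ in range(attempts):
        try:
            return send_request()
        except (ConnectionError, TimeoutError) as error:
            last_error = error
            time.sleep(delay)  # give the node time to recover before retrying
    raise last_error
```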
Hive Application Framework: framework for building robust and scalable Hive apps
A lot of our work during the last period has continued to focus on app framework development and testing.
We continued to work on code cleanup associated with the new HAF repo (this is the new repo, mentioned last week, that contains the components common to all HAF-based applications; it was created to better manage version compatibility among HAF components and prerequisite applications such as hived).
A lot of documentation was added and/or updated, more testing was done by 3rd-party testers (like me) to ensure the instructions are clear and accurate on "clean systems" (i.e. not the developer's computer), fixes were made for build and test compatibility on both Ubuntu 18 and 20 (although Ubuntu 20 is still the recommended platform for any HAF-related development), etc.
A new API call, hive.connect, was added which handles database inconsistencies that can potentially arise if the connection between hived and the postgres server is broken, allowing serializing to be smoothly resumed: https://gitlab.syncad.com/hive/haf/-/merge_requests/13
We also added system tests for the sql_serializer to the new HAF repo: https://gitlab.syncad.com/hive/haf/-/merge_requests/17
With this addition, we have a full test suite for all the components contained in the HAF repo.
Optimizing HAF-based account history app (Hafah)
This week we continued to do performance testing and optimization of hafah.
We added a library that allows us to track memory usage, and the latest incarnation with memory optimizations was able to sync all the way to the headblock using only 4GB of virtual memory while configured to use 7 threads for sending/receiving data. Previously using this many threads required nearly 128GB of memory, so it was a useful improvement.
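For readers curious how such memory tracking can be done, the sketch below samples the process's virtual memory size with the third-party psutil package; it is only an illustration of the technique, not the library actually used in hafah.

```python
# Hedged sketch: sample the current process's virtual memory during a long
# sync run. Uses psutil; this is not the memory-tracking library used in hafah.
import psutil


def log_virtual_memory(label=""):
    """Print the current process's virtual memory size in GB."""
    vms_gb = psutil.Process().memory_info().vms / (1024 ** 3)
    print(f"{label} virtual memory: {vms_gb:.2f} GB")


# Example: sample memory before and after processing a batch of blocks.
log_virtual_memory("before batch")
# ... process a batch of blocks here ...
log_virtual_memory("after batch")
```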
Further improvements were also made to the sync process, which, at least in the 5M block scenario we tested, reduced sync time from 280s to 180s. This improvement still needs to be benchmarked with a full sync to the headblock, but it is likely we'll see similar performance gains for a full sync.
Benchmark multi-threaded jsonrpc server for HAF apps (using hafah as our “test” app)
Preliminary benchmarking of a multi-threaded jsonrpc server showed a 2-3x improvement in API performance for an experimental version of hafah synced to 5M blocks (measured relative to the original single-threaded jsonrpc server), but we still need to repeat these benchmarks with a version of hafah synced to the current head block.
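A benchmark along these lines can be as simple as issuing the same JSON-RPC call from many client threads and measuring throughput. The sketch below is a hedged example of that setup, not the benchmark actually run; the endpoint URL, thread count, and account name are assumptions.

```python
# Hedged sketch of a concurrent JSON-RPC benchmark; the endpoint, account,
# and request counts are assumptions for illustration only.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "http://localhost:8080"  # assumed local hafah jsonrpc endpoint
REQUEST = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "account_history_api.get_account_history",
    "params": {"account": "blocktrades", "start": -1, "limit": 100},
}


def single_call(_):
    return requests.post(ENDPOINT, json=REQUEST, timeout=30).status_code


def benchmark(total_requests=1000, client_threads=8):
    start = time.time()
    with ThreadPoolExecutor(max_workers=client_threads) as pool:
        statuses = list(pool.map(single_call, range(total_requests)))
    elapsed = time.time() - start
    errors = sum(status != 200 for status in statuses)
    print(f"{total_requests} requests in {elapsed:.1f}s "
          f"({total_requests / elapsed:.1f} req/s), errors: {errors}")


if __name__ == "__main__":
    benchmark()
```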
Hivemind (social media middleware app used by social media frontends like hive.blog)
As mentioned last week, a bug was detected during our production testing: some notifications were showing dates from 1970.
We've fixed this bug and the new version with the fix is now being tested in production. Assuming no problems, we'll tag an official release with the bug fix in the next couple of days.
Work in progress and upcoming work
- In progress: experiment with generating the "impacted accounts" table directly from the sql_serializer to see if it is faster than our current method, where hafah generates this data on demand as it needs it. This task will also require creating a simplified form of hafah. In fact, the new hafah would be so simplified on the indexer side that we're considering adding extra functionality to it, allowing it to maintain a per-block account balance history so that the indexing portion of the code does some real work. This would also be useful as a template for future work on 2nd-layer tokens.
- Release a final official version of hivemind with postgres 10 support, then update hivemind CI to start testing using postgres 12 instead of 10.
- Run tests to compare results between the account history plugin and HAF-based account history apps (a sketch of this comparison appears after this list).
- Finish setup of continuous integration testing for HAF account history app.
- Finish conversion of hivemind to HAF-based app. Once we’re further along with HAF-based hivemind, we’ll test it using the fork-inducing tool.
- Fix RC cost estimations for execution time, as described in the hived section of this post.
- Deploy new version of condenser that displays RC level for signed-in account.
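For the account history comparison mentioned in the list above, the sketch below shows the general idea: request the same account history from a hived node running the account history plugin and from a HAF-based account history (hafah) instance, then diff the results. The endpoint URLs and account name are assumptions, and this is only an illustration, not the actual test code.

```python
# Hedged sketch of comparing account history results between a hived node
# (account history plugin) and a hafah instance; endpoints are assumptions.
import requests

HIVED_ENDPOINT = "https://api.hive.blog"  # assumed node with the account history plugin
HAFAH_ENDPOINT = "http://localhost:8080"  # assumed local hafah instance


def get_account_history(endpoint, account, start=-1, limit=100):
    """Fetch account history via the account_history_api from the given endpoint."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "account_history_api.get_account_history",
        "params": {"account": account, "start": start, "limit": limit},
    }
    return requests.post(endpoint, json=payload, timeout=30).json()["result"]


def compare(account):
    reference = get_account_history(HIVED_ENDPOINT, account)
    candidate = get_account_history(HAFAH_ENDPOINT, account)
    print(f"{account}: {'results match' if reference == candidate else 'results differ'}")


compare("blocktrades")
```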
The account @blocktrades.com is downvoting a lot of accounts, probably because it follows Spaminator's trail. I don't know if the account belongs to you, but if it does, please review these votes. There are many users suffering from centralization and the lack of a better way to monitor transgressions on the platform. I have a lot of admiration for the work you do and I would not like to have to withdraw my investments here, my home since Steem. I came here after a Sybil attack, and now I'm under attack. Thanks for the attention...
This account is following Spaminator's trail, but I've asked for it to be used only to downvote outright spam/phishing posts and not plagiarism (which I understand is what your post fell under), so I guess that will be adjusted soon.
Thank you very much for your attention @blocktrades.com and for the reply. They accused me of recycling my posts, but actually I was translating my posts into two different posts. I aim to promote the development of the Portuguese language, and the English language has greater reach here. There is still no good solution for translations, aesthetically, and of course there are other priorities in development. And I believe that translating is a job, not recycling posts; a lot of people work with translation in my country. I explained this to them and they didn't understand. Thank you very much, and for everything you do for Hive.
Why did you downvote us?
We already have questions about that Spaminator thing and now you have joined him?
This is frustrating.
Yeah, I agree with what you are saying; if this continues to happen, I think it will become a debate on Hive. Because of things like this, curators who lose rewards on posts they upvoted (believing they were not plagiarism, or not knowing the account they upvoted was plagiarizing) will start to think that the HW project wants to earn money by taking the curators' rewards from the posts they curated.
Is blocktrades.com / Spaminator a bot or a real user?
Blocktrades is the lead developer of the Hive software, and critical to the growth of the Hive platform. Spaminator is a bot founded by a witness, who has a team (HiveWatchers) behind her. It's an interesting bot, which in theory serves to protect the platform, but there are many problems too. They forget that customer service is critical, and they tend to be more punitive than educational and constructive.
OK, thanks for the explanation. Both of you are downvoting some of my publications, and they are not spam or plagiarism, only photographs that I take with my cell phone. I don't understand why they downvote me, what criteria they follow, etc.
It seems that a lot of work has been done this week, thank you. For the documentation, is it planned to include diagrams/graphs? (I know I'm repeating myself.) I have the impression that your team members have reached a new level of intrinsic knowledge of the HIVE core code. How many of them now have this level of knowledge?
Yes, there are several diagrams in the new HAF documentation to show data flow and algorithms.
I would guesstimate we have at least six devs who have a lot of experience with hived (and other sibling chains such as BitShares, Peerplays, EOS, and BEOS), and we have several other devs who are gaining familiarity with the code base.
Probably a similar number have worked on hivemind, which isn't technically core code, but is responsible for a lot of the social media behavior of the blockchain.
That's good to hear. I hope @inertia will be able to resume the updates of the developer portal to make it a must-have.
I had fun yesterday going to see what became of STEEMIT after almost 2 years stuck in their immobilism. It allowed me to take the necessary distance to realize how much HIVE has grown and that we have acquired a robust, efficient and motivated core developer team. So, knowing that you have so many skilled HIVE devs on your team, plus the involvement of so many other great devs bringing their contributions, is all the more reassuring, from an internal as well as an external perspective, about our future.
Congratulations @blocktrades! You have completed the following achievement on the Hive blockchain and have been rewarded with new badge(s):
Your next payout target is 1410000 HP.
The unit is Hive Power equivalent because your rewards can be split into HP and HBD
You can view your badges on your board and compare yourself to others in the Ranking
If you no longer want to receive notifications, reply to this comment with the word STOP.
To support your work, I also upvoted your post!
Check out the last post from @hivebuzz:
Support the HiveBuzz project. Vote for our proposal!
Interesting. Not sure what the impact will be with the RC changes and all the software optimizations for Hive.
What do you expect will be ready in the next HF, and what about the sidechain you are working on? :D
I'm not sure yet on the next HF date. Hopefully we can stick to the original timeframe of around December/January, but I have one "secret-for-now/yet-to-be-announced" feature that I really want to get into this hardfork (important enough to delay HF by a month or two if necessary to get it working), and we haven't started on it yet, so we'll see how it goes. Other features for the hardfork are either finished or in a state that I'm very comfortable with.
HAF work has progressed very well and I think we'll be ready for a production release shortly after HiveFest. At that point we can start working on the smart contract sidechain (it will be built on top of HAF).
"secret-for-now/yet-to-be-announced" - I guess HMT (ERC20 capable token contracts)? ;)
$WINE
No, not HMTs, as that potential feature has been discussed many times, so nothing too secret about the idea.
This is an improvement that hasn't been suggested before. But before I propose it for inclusion in the HF, I wanted to have it proved out and close to a deployable state.
That feature has been discussed many times, but we still have not initiated its development :(
Anyway, I'm curious to hear the news; I hope to see it with HF26.
Cheers~
Shortly after we started Hive development, we reviewed the state of the SMT code, and at that point we decided it was better to create more efficient and flexible 2nd layer tokens that support smart contracts instead of continuing the SMT work.
There was still a ton of work to be done on the SMT code, and they didn't support smart contract functionality anyways. The SMT code was tightly focused towards launching configurable-ICO coins with voting capability, but not a lot else.
Thanks for the explanation.
"The SMT code was tightly focused towards launching configurable-ICO coins with voting capability, but not a lot else."
I had no idea of this before, but I do remember it was something ICO-based. I hope we will focus on this feature after the next HF. We have Hive Engine for now, but I think we should compete against BSC, and tokens should be listed on external DEXs.
HIVE seems like the best platform/token we can all trust. Feeless, scalable, and fast: we already have that here. I hope we will bring ERC20 capability one day in 2022!
Cheers~
I was about to say "wen next hf?" but after reading your comment I guess early spring is the most likely date. Anyway, it looks like it will be an even bigger deal than the last one, right?
I think the biggest deal will be release of HAF. Strictly speaking, it's not part of the hardfork, however, and will be released before the hardfork.
HAF is like a Christmas gift for the blockchain then :)
We were chatting the other day about how smoothly this blockchain works even under the heaviest transaction load, with free and almost instant transactions. It takes a good group to keep Hive progressing smoothly, and I am glad your hands are on the wheel.
It is fascinating how I understand almost none of this, except that it works better now :D
Congratulations @blocktrades! Your post has been a top performer on the Hive blockchain and you have been rewarded with the following badge:
Congratulations @blocktrades! You received a personal badge!
Thank you for your participation in the HiveFest⁶ Meetings Contest.
We truly hope you enjoyed HiveFest⁶ and it's been our pleasure to welcome you to the AltspaceVR world.
See you next year!