I have found a significant piece of information missing in the API definitions:
https://developers.hive.io/apidefinitions/#bridge.get_ranked_posts
I have found out that there are at least three more query parameters for bridge.get_ranked_posts:

{"limit": 21, "start_author": "", "start_permlink": ""}
api.hive.blog accepts a limit between 1 and 100.
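As a sketch, the full JSON-RPC request body could look like this (the sort and tag values here are illustrative assumptions, not part of the documented parameters above):

```javascript
// Hypothetical request body for bridge.get_ranked_posts.
// The sort and tag values are only examples.
const body = {
  jsonrpc: "2.0",
  method: "bridge.get_ranked_posts",
  params: {
    sort: "created",     // e.g. trending, hot, created
    tag: "hive-dev",     // tag or community to filter by
    limit: 21,           // api.hive.blog caps this at 100
    start_author: "",    // empty on the first call
    start_permlink: "",  // empty on the first call
  },
  id: 1,
};

// Against a live node this would be POSTed roughly like:
// fetch("https://api.hive.blog", { method: "POST", body: JSON.stringify(body) })
//   .then(r => r.json())
//   .then(({ result }) => console.log(result.length));
```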
If you expect more than 100 results, you need to call again and pass:
start_author
the author of the last post in the previous results (without the @),
start_permlink
the permlink of the last post in the previous results.
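The pagination described above could be sketched like this (a minimal sketch: callApi is an injected stand-in for an actual fetch() against a node, and the function and option names are mine, not official):

```javascript
// Minimal pagination sketch for bridge.get_ranked_posts. `callApi` is an
// injected async function (params) => posts, so the loop can be exercised
// without a live node; in production it would POST a JSON-RPC body to a
// public node such as https://api.hive.blog.
async function fetchAllRankedPosts(callApi, { pageSize = 100, maxPosts = 300 } = {}) {
  const posts = [];
  const seen = new Set(); // guard in case a page repeats the anchor post
  let start_author = "";
  let start_permlink = "";
  for (;;) {
    const page = await callApi({ limit: pageSize, start_author, start_permlink });
    let added = 0;
    for (const p of page) {
      const key = `${p.author}/${p.permlink}`;
      if (!seen.has(key)) {
        seen.add(key);
        posts.push(p);
        added += 1;
      }
    }
    // Stop when the API runs dry or we have collected enough.
    if (added === 0 || page.length < pageSize || posts.length >= maxPosts) break;
    const last = posts[posts.length - 1];
    start_author = last.author;     // author of the last post, without '@'
    start_permlink = last.permlink; // permlink of the last post
  }
  return posts.slice(0, maxPosts);
}
```

Injecting callApi keeps the loop testable offline; swapping in a real fetch() wrapper is a one-liner.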
I also made a post where I demonstrate the usage:
https://peakd.com/hive-dev/@felixxx/javascript--hive-api-bridge-to-fetch-all-posts-for-a-certain-tag
I would like to fix the definitions, and offer some explanations and perhaps an example.
I made an account at gitlab.syncad.com.
I looked at it for a moment and decided that it would take me too long to even open an issue.
I would be happy if someone more experienced with GitLab, the contribution guidelines, and all of that fixed it. I would also very much like someone to help me and walk me through how to go about this, as I am sure I will find a bunch more errors and gaps there.
I have already written a very angry post about it, which no dev has read.
And that was probably for the better.
I have since found out that I used the wrong tag/category/community.
Reposting without cussing and this time under #hive-devs. I also joined the Discord guild.
You might want to look at the _readme.txt files in the tests as a reference/help, e.g.: https://gitlab.syncad.com/hive/hivemind/-/blob/develop/tests/api_tests/hivemind/tavern/bridge_api_patterns/get_ranked_posts/_readme.txt
These _readme.txt files are not documentation of how it should work; they were made from the actual Hivemind code when the tests were written (before HF24 - I'm not sure why they all appear to be only 2 years old, perhaps the result of a GitLab failure some time ago).

Bro, 'hivemind' was the keyword here. I was browsing through hive itself.
Did you join the main Hive Discord? I can't dev, but you can make connections there to get it fixed.
Maybe @howo who works as part of the main dev team?
I joined hive-devs discord.
The bigwigs are all there.
Reblogged for visibility
https://gitlab.syncad.com/hive/devportal/-/issues/77
One year.
I guess nobody cares.
I can give a reason :o)
A lot of interfaces, especially in Hivemind, are riddled with problems, and it is far easier to abandon an interface than to change it. The current trend is to push development of second-layer applications towards HAF, where they can pull the data they need directly or through PostgREST. So yeah, there is little interest in documenting stuff that those who use it already know, and those who don't know are not supposed to use.
So it is gatekeeping.
So this software was never going to scale, contrary to what I was told.
My first impulse was to think this would never work once the blockchain (and therefore the tables) grew. I was told that would never be an issue - by the same people who had built Hivemind for years.
I have looked at HAF, and since it requires pretty pricey machines (you need to run your own node to start with), it cannot be the right solution.
I always wondered: if I have a node running and it already has a form of DB, why not query that DB directly?
Anyways, we are back at the main problem: Lack of transparency, working behind closed doors. Little communication.
Steem had (and still has) enormous problems with scaling. In comparison, Hive is light as a feather, and a lot of people are working on making it even lighter. Computers are also getting better and cheaper all the time. On the other hand, blockchains (not just Hive) are getting heavier. I hear that even the old grandma Bitcoin is over 400 GB today. The question is what is going to grow faster - the hardware or the data. So far hardware wins big time: I'm running a node on my home computer and it doesn't interfere with my other activity in any way, and my development machine can run multiple nodes at the same time.
I'm not running HAF though, and the last time I was running the old Hivemind, the chain was a couple million blocks lighter and I was doing that on a separate server. I don't know enough about HAF, but my understanding is that if you are a service provider (you develop and maintain your own app on Hive), you will have two choices: either run your own node with HAF parameterized down to the data your app actually needs, or deploy on some "full HAF" server that has all the data and can host many apps (I think the latter can only be sustainable long term if the operator of such a node receives some compensation from the devs of the apps hosted on that server... or from the DHF).
HAF is heavy, but it is still in its infancy and a lot can still be done. I think it is yet to prove itself in a regular production environment. There are two aspects: whether it is able to keep up with live sync even with bigger blocks (the current mainnet data rate is really not challenging), and how queries are going to be affected when its size doubles or triples. Both issues are being investigated and problems addressed, but I'm only observing from the sidelines, so that's where my knowledge on the topic ends. We are also making tools to be able to flood it with max-sized blocks (I'm pretty confident that the node itself will not be overrun by blocks filled with 150-200 times more transactions than the typical blocks we have on mainnet these days, but how HAF is going to react is anybody's guess).
People in IT are typically not known for great communication skills. We let the code talk 😜
Are you going to any RL events? I'd buy you all the drinks you want.
edit:
It's probably better I let you work.
Thanks for the answers.
At least there is someone smart working on this stuff. What gets communicated and reaches me never satisfied me. But if you spent your time explaining things to people like me, some layers deep in the comments, you'd never get anything done.
I want to write a post about how hivemind was DOA, but then you'd think twice before feeding me information again. Have a good one. I'll go back to building my own stuff. I'll build client-side stuff that strains the nodes, so somewhere, someone is forced to stick their head out. I hope it isn't you, or that you get paid enough.
To willingly expose myself to the horrors of human interaction? No way! 😄
I can help you with that and explain the 'social code' and its applications, just like you explained the above to me.
You'd have to buy the drinks then, though 😝
But I read u. :)
I am always here for you. Although i can't help in this matter.
xD you so cute again
A lot of my friends left Hive. I am worried, to be honest. I don't want more to leave us 😞
Great initiative in identifying missing information for bridge.get_ranked_posts. Keep up the good work! 👍
I will spider the development site, you can do the same and we can send each other patches as we discover mistakes.
If you check the comments under this post, you will find out why that is wasted energy (at least for all bridge API calls).