Building Custom Calls for Hive - An Example

in #dev · 2 months ago (edited)

Context

I wrote a post where I demonstrated how to build calls for Hive's JSON-RPC endpoints.
To show that this can have certain benefits over using predefined libraries like BEEM, I came up with an example that does one particular job faster than BEEM.

To keep the post short and tight, I did not comment too much.
Writing such posts isn't easy, because I never know who will read them.
For some people, everything code-related is like magic. For others, the stuff I work on seems so far below their level that they don't even understand my problems.

But this time, I had a reader. Someone who also tested my examples.
Thanks @arc7icwolf!

He tested the examples and then went ahead and tried building his own calls, based on Hive's developer documentation.
That's exactly what I intended: enabling readers to explore that on their own.

He has since run into problems and replied to me.

Here is my attempt to demonstrate my workflow with this stuff....

get_discussions_by_author_before_date

https://developers.hive.io/apidefinitions/#condenser_api.get_discussions_by_author_before_date

Another great example of how incomplete these docs are.
The expected JSON response, according to the docs, is: []
An empty list. 😡 ...what a useful call! The response is the constant []

I don't like anything get_discussions; I have had trouble with those calls before.
I have raged about this stuff before...
The higher-level functions, which precompile data in a specific way, seem to get worse the more abstract they are. For beginners, I'd generally recommend sticking to block_api, account_history_api, and other low-level core features.
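To make that recommendation concrete, here is a minimal sketch of what a low-level call looks like (the block number is an arbitrary example). Note that block_api methods take named parameters as a JSON object, unlike condenser_api methods, which take a positional list:

```python
import json

# A minimal block_api call, built as a JSON-RPC payload.
# block_api methods take named parameters (a JSON object).
payload = {
    "jsonrpc": "2.0",
    "method": "block_api.get_block",
    "params": {"block_num": 8675309},  # arbitrary example block number
    "id": 1,
}
data = json.dumps(payload)
# To actually send it (requires the requests library and network access):
# import requests
# response = requests.post("https://api.hive.blog", data=data)
print(data)
```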

But I think we can make it work... (I am building as I write)

Parameters

In my last dev post, I used database_api.get_dynamic_global_properties, which takes no parameters. That made it an easy method to demonstrate the concept with.

get_discussions_by_author_before_date takes parameters, and again the docs aren't helpful:

Query Parameters JSON: ["", "", "1970-01-01T00:00:00", 100]
Why publish these docs and leave them empty? 😡

Anyway, I assume it takes at least 4 parameters:
3 strings and one integer.
The example from the docs provides more help:

"params":["hiveio","firstpost","2016-04-19T22:49:43",1]

I have done this before, so I have an idea of how it's supposed to work.
Without more context or prior knowledge, though, the API is unusable as documented.
What anyone can see: the call takes 4 different parameters.

  • hiveio (string) - an account name
  • firstpost (string) - a post identifier (the permlink)
  • 2016-04-19T22:49:43 (string) - a date in a specific format
  • 1 (integer) - most likely the batch size (1 will return a list with 1 item)

Example Parameters

For a test shot, I will simply use fixed parameters.

"params":["felixxx","python-requests-for-hive-json-rpc","2024-09-14T10:00:00",10]

  • felixxx - that's me!
  • python-requests-for-hive-json-rpc - an identifier of a post of mine
  • 2024-09-14T10:00:00 - today, 10 AM (blockchain runs on UTC)
  • 10 - batch size of 10.

Test Code

import requests

url = 'https://api.hive.blog'
params_string = '"params":["felixxx","python-requests-for-hive-json-rpc","2024-09-14T10:00:00",10]'

# Assemble the JSON-RPC request body by hand, as one string.
data = '{"jsonrpc":"2.0", "method":"condenser_api.get_discussions_by_author_before_date", ' + params_string + ',"id":1}'

response = requests.post(url, data=data)
print(response.json())

I am not using f-strings; I prefer the + operator for manipulating strings.
That is not recommended, and it's generally considered poor style, but I find it easier to read, use, and understand.
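For comparison, here is a sketch of the more idiomatic approach: build the body as a Python dict and let json.dumps handle all the quoting, so a stray quote can never break the payload. With the requests library, you can also pass the dict directly via the json= keyword:

```python
import json

# The same request body, built from a Python dict instead of
# string concatenation. json.dumps takes care of the quoting.
payload = {
    "jsonrpc": "2.0",
    "method": "condenser_api.get_discussions_by_author_before_date",
    "params": ["felixxx", "python-requests-for-hive-json-rpc",
               "2024-09-14T10:00:00", 10],
    "id": 1,
}
body = json.dumps(payload)
# Sending it works exactly like the hand-built string:
# response = requests.post(url, data=body)
# ...or let requests serialize the dict itself:
# response = requests.post(url, json=payload)
print(body)
```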

That gives me a response that looks like a list of 'discussion' objects.

You can define a start and an end point by using the date and the post identifier, much like here:
https://peakd.com/@felixxx/hive-api-reference-incomplete
...to collect more data than the batch size allows.
I am a bit too annoyed to explain the details right here...
I am not getting paid for any of this...
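Still, here is a rough sketch of how that pagination could look (untested against the live API; the helper name and the second permlink are my own hypothetical examples): feed the permlink of the last post in one batch back in as the post identifier of the next call.

```python
import json

def build_payload(author, start_permlink, before_date, limit):
    """Build the JSON-RPC body for one page of results."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "condenser_api.get_discussions_by_author_before_date",
        "params": [author, start_permlink, before_date, limit],
        "id": 1,
    })

# First page: an empty permlink starts from the newest post.
first_page = build_payload("felixxx", "", "2024-09-14T10:00:00", 10)
# Next page: pass the permlink of the last result of the first batch.
# "last-permlink-from-first-batch" is a placeholder for the real value,
# e.g. response.json()["result"][-1]["permlink"].
next_page = build_payload("felixxx", "last-permlink-from-first-batch",
                          "2024-09-14T10:00:00", 10)
print(first_page)
print(next_page)
```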

Conclusion

I made the function work. It seems to be a useless function, though.

I didn't know how this would end, as I coded it while writing this post.
I made it work, but I have no real idea what the response data represents.
I also have no desire to waste any more time exploring this call.
I hope it works as an example for @arc7icwolf and helps him along his path.

@vandeberg built this.
It took years and cost a fortune, yet nobody bothered documenting it properly.
At the time, I was so impressed by these developers.
Now that I caught up a little, I think much less of them.

Most of them went to a different company and work for @andrarchy now.
I am sure they are building great things this time. 🤣


Thanks for taking the time to explore this method: following your workflow is already very useful for me, as it helps me understand whether I was doing something right or was completely off-road.

I'm making some progress with my scripts and it's all thanks to you! I know that my stuff is very basic, but if I can keep making even small progress, I'm sure one day I will be able to look back and be surprised at how much better I have become :)

I ran into the same problems 2 years ago.

The date is not used! You can find that information here:
https://developers.hive.io/tutorials-recipes/paginated-api-methods.html#tags_apiget_discussions_by_author_before_date

Note: The before_date param is completely ignored. This method is similar to get_discussions_by_blog but does not serve reblogs.

You wrote this, 2 years ago:

You can also enter a permalink as second parameter and you will see that you get other permalinks.

Do you understand pagination?

Yes. You have to add the last permlink you got in the results to get the next 10.

I was too lazy to type, so I asked ChatGPT, but it set the permlink as the 3rd parameter instead of the 2nd. I noticed it, though, and here is the corrected code:

https://jsfiddle.net/hive_coding/5wfojx4u/

So you can only ever get to the next 10 (or however many you request).
Something like (1 2 3 4 5 ... 10): jumping straight to the 5th page is not possible, unless you have already made all the calls in order and remembered every permlink along the way.

That's why I would prefer HiveSQL for calls that fetch information like this.

'Yes' would have been enough.
