Some of you who have been developing STEEM apps and custom tools in Python will know, or know of, my old projects asyncsteem (Python 2 + Twisted) and txjsonrpcqueue (Python 2 and 3 + Twisted/asyncio).
Both projects were (are) asynchronous Python JSON-RPC libraries aimed primarily at use with STEEM.
I'm leaving STEEM and those code bases behind me now. I've recently started working on a new asynchronous Python JSON-RPC library for HIVE, and I'm keeping a close eye on BLURT as well; I'll try to support it too once public API nodes become available.
I'm taking the lessons learned from both asyncsteem and txjsonrpcqueue and starting with an all-new code base in the new project, named hivequeue.
In this blog post I want to give a first status report on progress on this new library, with a few details on its design. The hivequeue library, like its predecessors, isn't a general-purpose HIVE library for Python. It is meant for, and best used for, specific types of HIVE projects.
## Basic design of hivequeue

### The hysteresis queue
Hivequeue is similar in design to txjsonrpcqueue, but with important architectural differences that make it friendlier towards API nodes. The base concept is that of an in-memory asynchronous hysteresis queue. The hysteresis queue by itself creates a different, yet quite robust, error model for resource handling, something that can often turn out to be a problem in reactive asynchronous programming. You set a high watermark and a low watermark on the queue, and RPC calls either get queued and turned into awaitables, or queuing fails instantly.
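To make the concept concrete, here is a minimal asyncio sketch of such a hysteresis queue. The class and method names are my own illustration for this post, not hivequeue's actual API.

```python
# A minimal sketch of the hysteresis-queue concept; class and method
# names are illustrative for this post, not hivequeue's actual API.
import asyncio


class HysteresisQueue:
    """Queue that stops accepting work above the high watermark and only
    starts accepting again once drained below the low watermark."""

    def __init__(self, low=200, high=1000):
        self._low = low
        self._high = high
        self._accepting = True
        self._queue = asyncio.Queue()

    def put(self, request):
        """Queue a request and return an awaitable future, or fail instantly."""
        if self._accepting and self._queue.qsize() >= self._high:
            self._accepting = False  # crossed the high watermark
        if not self._accepting:
            raise RuntimeError("queue congested, request rejected")
        future = asyncio.get_running_loop().create_future()
        self._queue.put_nowait((request, future))
        return future

    async def get(self):
        """Called by a client worker to take the next (request, future) pair."""
        item = await self._queue.get()
        if not self._accepting and self._queue.qsize() <= self._low:
            self._accepting = True  # drained below the low watermark
        return item
```

The gap between the two watermarks is what gives the robust error model: under sustained overload the queue fails fast and keeps failing until it has genuinely drained, instead of flapping between accepting and rejecting on every request.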
### The low-level RPC clients

On the other side of the queue is what basically boils down to clients as workers, one per activated public API node.
These clients are invisible to the user of the library, and should not be used directly.
### The rate-limiting subsystem
This is where hivequeue will differ greatly from its predecessors. The hivequeue client workers work in close conjunction with a rate-limiting subsystem. If the public API node supports it, hivequeue will respect draft-polli-ratelimit-headers-00 rate-control headers. If not, the rate-limiting subsystem has a built-in simulated polli rate-limit implementation that can do fully client-side discretionary rate limiting.
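For readers unfamiliar with the draft: a server implementing it returns RateLimit-Limit, RateLimit-Remaining and RateLimit-Reset headers on each response. A hedged sketch of consuming them, assuming plain dict headers and the delta-seconds form of RateLimit-Reset:

```python
# A hedged sketch of consuming draft-polli-ratelimit-headers-00 headers;
# the function name is illustrative and the headers are assumed to be a
# plain dict with RateLimit-Reset in its delta-seconds form.
import time


def parse_ratelimit_headers(headers):
    """Return the server's rate-limit state, or None if it doesn't send it."""
    try:
        return {
            # RateLimit-Limit may carry extra quota policies after a comma.
            "limit": int(headers["RateLimit-Limit"].split(",")[0]),
            "remaining": int(headers["RateLimit-Remaining"]),
            "reset_at": time.time() + int(headers["RateLimit-Reset"]),
        }
    except (KeyError, ValueError):
        return None  # no (valid) draft-polli headers: fall back to simulation
```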
### Block-stream piggybacking
The client workers use JSON-RPC batches when possible, and a batch with room to spare will get padded with requests for recent (or slightly less recent) blocks. These blocks are used to feed registered callbacks for specific events.
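A rough sketch of the padding idea; the helper, the naive id assignment and the choice of condenser_api.get_block are my assumptions for illustration, not hivequeue internals:

```python
# An illustrative sketch of block-stream piggybacking; the helper and
# the choice of condenser_api.get_block are assumptions, not hivequeue
# internals.
def pad_batch(batch, last_block, max_batch=20):
    """Fill the unused tail of a JSON-RPC batch with block fetches."""
    next_block = last_block + 1
    while len(batch) < max_batch:
        batch.append({
            "jsonrpc": "2.0",
            "id": len(batch) + 1,  # naive id assignment for the sketch
            "method": "condenser_api.get_block",
            "params": [next_block],
        })
        next_block += 1
    return batch
```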
### Server-side keychain API
As stated, hivequeue isn't meant to be general purpose. Most signed operations won't be supported directly on the Python side of things. When hivequeue is used for the server side of some kind of web application, however, an abstraction is to be provided that allows server-side Python programmers to initiate requests from the client-side keychain code.
This part of the library is still in early planning.
### custom_json-only Python-side signing
The hivequeue library is explicitly NOT meant for writing any kind of voting bots or automated spam bots. In fact, as the author I will go out of my way to keep hivequeue from enabling applications that are, in my eyes, abusive. One essential piece of functionality that should (in time) become possible with hivequeue, though, is custom_json operations.
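For reference, a custom_json operation on HIVE is just a small JSON structure; the helper below is purely illustrative, and the signing and broadcast plumbing around it is what hivequeue would eventually provide.

```python
# The shape of a HIVE custom_json operation; the helper is purely
# illustrative, the signing/broadcast plumbing is what hivequeue would
# eventually wrap.
import json


def make_custom_json(posting_auths, json_id, payload):
    """Build the operation structure for a custom_json broadcast."""
    return [
        "custom_json",
        {
            "required_auths": [],                    # active-key auths, none here
            "required_posting_auths": posting_auths,
            "id": json_id,                           # application-defined id
            "json": json.dumps(payload),             # payload is JSON-encoded twice
        },
    ]


op = make_custom_json(["some-account"], "my_app", {"action": "ping"})
```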
## First little milestones

### A working client-side rate-limit setup
While the worker clients aren't working yet, the first piece of working code is a client-side discretionary rate-limiting implementation. You can specify the window size in seconds and the number of permitted JSON-RPC batch requests per window. There is also the option to ask the rate-limit subsystem to smoothen requests, which results in less bursty usage of the rate-limit windows.
In theory the rate limiter should also work with headers from a draft-polli-ratelimit-headers-00 server, but as the client workers aren't ready yet, I haven't been able to test this.
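To show the general technique, here is a rough asyncio sketch of a discretionary windowed limiter with optional smoothing. The parameter names mirror the config fields shown further below, but the code itself is my own sketch, not hivequeue's actual implementation.

```python
# A rough asyncio sketch of discretionary windowed rate limiting with
# optional smoothing; illustrative only, not hivequeue's actual code.
import asyncio
import time


class WindowRateLimit:
    def __init__(self, window=60, limit=600, smoothen=True):
        self._window = window      # window size in seconds
        self._limit = limit        # permitted batch requests per window
        self._smoothen = smoothen
        self._start = time.monotonic()
        self._used = 0

    async def acquire(self):
        """Await until the next JSON-RPC batch request is permitted."""
        now = time.monotonic()
        if now - self._start >= self._window:
            self._start = now      # a fresh window, reset the budget
            self._used = 0
        if self._smoothen:
            # Spread the budget evenly over the window instead of bursting.
            slot = self._start + self._used * (self._window / self._limit)
            if slot > now:
                await asyncio.sleep(slot - now)
        elif self._used >= self._limit:
            # Burst mode: the budget is spent, sleep out the window.
            await asyncio.sleep(self._start + self._window - now)
            self._start = time.monotonic()
            self._used = 0
        self._used += 1
```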
### A per-node JSON config file format
As can be seen in this directory, I've come to a simple but quite complete JSON file format for specifying parameters for the use and capabilities of public API nodes. I hope that when the library is ready, API-node maintainers will send me pull requests with updates for their node configs, or better yet, provide me with a GitHub link that I (and users of my library) can use as a git submodule in my repo.
An example of a config JSON:
```json
[{
    "host": "api.hive.blog",
    "protocol": "https",
    "enabled": true,
    "rate_limit": {
        "simulate": true,
        "smoothen": true,
        "window": 60,
        "limit": 600
    },
    "batch": {
        "enabled": true,
        "max_batch": 20
    },
    "api": {
        "chain": "HIVE",
        "version": "0.23.0",
        "apikey": {
            "enabled": false
        }
    }
}]
```
The config allows specifying the desired discretionary rate-limiting info mentioned above. It allows the specification of maximum JSON-RPC batch sizes, which can differ between API nodes, and, last but not least, it allows specifying which APPBASE version the node supports and, if needed, which specific sub-APIs are disabled or enabled for this node.
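As a hedged sketch of how such per-node config files might be consumed (the directory name and loader function are assumptions for illustration only):

```python
# A hedged sketch of loading per-node config files; the directory name
# and loader are assumptions for illustration, not hivequeue's code.
import json
import pathlib


def load_node_configs(config_dir="nodes"):
    """Yield the config of every enabled public API node."""
    for path in sorted(pathlib.Path(config_dir).glob("*.json")):
        with path.open() as handle:
            for node in json.load(handle):  # each file holds a JSON list
                if node.get("enabled", False):
                    yield node
```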
### API-node API keys
Another sample JSON for an idea I've been toying with that I'll be sure to implement in the library:
```json
[{
    "host": "demo.hive.timelord.ninja",
    "protocol": "https",
    "enabled": false,
    "port": 6443,
    "rate_limit": {
        "simulate": false,
        "smoothen": true
    },
    "batch": {
        "enabled": true,
        "max_batch": 1
    },
    "api": {
        "chain": "HIVE",
        "version": "0.23.0",
        "apikey": {
            "enabled": true,
            "optional": true,
            "mode": "query_field",
            "acquire": "hq_api_challenge",
            "batch": {
                "enabled": true,
                "max_batch": 50
            }
        },
        "sub_api": {
            "include": ["condenser"]
        }
    }
}]
```
The basic idea here is that the owner/maintainer of a full API node might be willing and able to provide a basic service level to the general public, but that a specific project, let's say something like the old flag-war reports I did from @pibarabot in the past, requires a higher service level. The owner of the project and the owner of the full API node could come to an agreement that using the API with an API key would elevate the available service level for that specific project.
## More to come
As stated, there really isn't anything to use or try just yet; I'm still in the early stages of development. I'll try to write a short blog post like this whenever there are new noteworthy milestones. I'm hoping to have a first beta-quality version of hivequeue operational before the end of summer. Any ideas or suggestions on what you've read above are highly welcome.