At the moment we are not working on this. First, we want to support 'read only' applications like account_history or hivemind, but I don't see any reason why the framework cannot be extended in the future to support 'read/write' applications.
What exactly is a 'read/write' application? I thought hivemind could write to the blockchain.
When I follow someone on Hive I can see that in block explorers, so isn't hivemind writing to the blockchain?
No, hivemind doesn't write to the blockchain itself. It can only write "2nd layer" information, derived from 1st layer (blockchain) data. When you follow someone on Hive, your Hive client app actually generates and signs a custom_json transaction (with a follow command embedded inside) that is sent to a hived node to get incorporated into the blockchain.
Hived just adds this new transaction to the blockchain, but it doesn't do any analysis/interpretation of the follow (because custom_json operations don't "do" anything as far as the first layer is concerned).
But hivemind sees this operation in the blockchain stream it gets from a hived node and sets the appropriate 2nd layer data so that API calls to hivemind return the appropriate results based on your follow list.
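To make that flow concrete, here is a rough sketch in plain Python (no Hive libraries) of the kind of custom_json operation a client app would generate for a follow. The exact field layout of the payload is an assumption based on the common follow convention, not something confirmed in this thread:

```python
import json

def make_follow_custom_json(follower, following):
    """Build the payload of a custom_json operation for a follow.

    The "follow" id and the ["follow", {...}] body shape are the
    convention hivemind interprets; hived itself never looks inside.
    """
    body = ["follow", {
        "follower": follower,
        "following": following,
        "what": ["blog"],  # an empty list would mean "unfollow"
    }]
    return {
        "id": "follow",                        # 2nd-layer "namespace"
        "required_auths": [],                  # no active authority needed
        "required_posting_auths": [follower],  # signed with the posting key
        "json": json.dumps(body),              # body travels as a string
    }

op = make_follow_custom_json("alice", "bob")
print(op["id"], json.loads(op["json"])[0])
```

Hived only checks the signature and stores the operation; giving the embedded JSON meaning is entirely up to 2nd-layer code like hivemind.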
Thank you, great explanation. So, getting back to the original comment on this thread, I still don't get where the question comes from.
If a follow is just a custom_json written to the blockchain, without any interpretation by hived, then as long as you have access to the hivemind code (for example, if it's open source), you can use that code to read the Hive blockchain, re-interpret those custom_jsons, and verify everything on your own.
So, every HAF app will be writing custom_jsons on Hive, and if the HAF app's code is open source, other apps can communicate with that app, given the data is now public.
It just needs some extra work compared to actual writing.
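That re-interpretation amounts to streaming the chain, filtering custom_json ops, and applying the same rules hivemind does. A toy sketch in plain Python (the hard-coded list stands in for blocks fetched from a hived node, and the follow payload shape is the common convention, assumed here):

```python
import json

def replay_follow_state(ops):
    """Rebuild 2nd-layer follow state from raw custom_json operations,
    the way hivemind (or anyone running its open-source rules) can."""
    follows = set()  # (follower, following) pairs
    for op_type, op in ops:
        if op_type != "custom_json" or op["id"] != "follow":
            continue  # 1st-layer data we don't interpret
        action, params = json.loads(op["json"])
        if action != "follow":
            continue
        pair = (params["follower"], params["following"])
        if params.get("what"):      # non-empty "what" -> follow
            follows.add(pair)
        else:                       # empty "what" -> unfollow
            follows.discard(pair)
    return follows

# Toy "block stream": alice follows bob, then unfollows.
stream = [
    ("custom_json", {"id": "follow",
                     "json": json.dumps(["follow", {"follower": "alice",
                                                    "following": "bob",
                                                    "what": ["blog"]}])}),
    ("custom_json", {"id": "follow",
                     "json": json.dumps(["follow", {"follower": "alice",
                                                    "following": "bob",
                                                    "what": []}])}),
]
print(replay_follow_state(stream))  # alice no longer follows bob
```

Anyone replaying the same operations with the same rules reaches the same state, which is what makes the 2nd-layer data independently verifiable.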
Yes, that's essentially correct.
In fact, I expect most cooperative HAF apps will directly communicate via a shared HAF database. This means that one app can directly read 2nd layer data generated by another HAF app (when the administrator of the HAF servers sets permissions appropriately). This will enable extremely fast and efficient communication, far beyond what is possible today via RPC calls between apps.
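HAF apps keep their state in a shared PostgreSQL database; the sketch below only simulates that idea with Python's built-in sqlite3 so it is self-contained. The table and column names are invented for illustration, not HAF's actual schema:

```python
import sqlite3

# One shared database, two "apps" using different tables -- a toy
# stand-in for HAF's shared PostgreSQL instance (names invented).
db = sqlite3.connect(":memory:")

# App A (a follow-tracking app) maintains its own 2nd-layer table.
db.execute("CREATE TABLE app_a_follows (follower TEXT, following TEXT)")
db.execute("INSERT INTO app_a_follows VALUES ('alice', 'bob')")

# App B (say, a notifications app) reads App A's table directly --
# no RPC round-trip, just a local query, assuming the database
# administrator has granted it read permission on that table.
rows = db.execute(
    "SELECT follower FROM app_a_follows WHERE following = 'bob'"
).fetchall()
print(rows)  # [('alice',)]
```

The speed claim follows from this shape: app-to-app "communication" becomes an in-database query instead of serializing a request, crossing the network, and parsing a response.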
So how is every HAF app writing custom_jsons on Hive if there is no read/write ability in HAF?
I think it's not. From what I understand, it's only reading user-broadcast jsons, then executing the app's code based on those jsons (like hive-engine does). Maybe later it will broadcast for the user, if the user grants access to a certain app?
OK, if this is true, then this is no better than the code I am already running.
Thanks for the info
Wow, thanks for simplifying this into a great, digestible piece that everyone can learn from. I love it when a discussion like this is stretched out. Thanks, I followed you @alpha
Maybe I expressed my thought poorly. Currently, HAF does not offer any help with broadcasting transactions (which means it does not help with writing new information into the chain), so when an application wants to react by sending a transaction, it must deal with this task on its own.
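"On its own" here means the app must sign the transaction itself (key handling is omitted below) and submit it to a hived API node. A minimal sketch of building the JSON-RPC request body; the method name follows hived's network_broadcast_api, but treat the exact endpoint and parameter names as assumptions:

```python
import json

def broadcast_request(signed_tx):
    """Wrap an already-signed transaction in a JSON-RPC 2.0 request.

    HAF doesn't help with this step: the app would POST this body to a
    hived API node itself. "network_broadcast_api.broadcast_transaction"
    is hived's broadcast method name, assumed here, not taken from HAF.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "network_broadcast_api.broadcast_transaction",
        "params": {"trx": signed_tx},
        "id": 1,
    })

# Placeholder transaction: a real one carries operations, expiration,
# block references, and valid signatures.
req = broadcast_request({"operations": [], "signatures": ["<sig>"]})
print(json.loads(req)["method"])
```

So a HAF app today is read-only with respect to the chain: HAF feeds it blockchain data, and anything it wants to write back goes through a request like this that the app assembles and sends itself.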
How far down the roadmap is the ability for read/write applications?
Right after the moon.
We will be so ready for this bull run in like 100 moons
We're not thinking about that at the moment; we need to finish what we're doing now: efficient reading and presentation of blockchain data. I suppose no one will prevent the community from extending HAF to support transaction propagation if that's important to them.