RE: One-Block Irreversibility for Delegated Proof-Of-Stake (DPOS)

in HiveDevs · 3 years ago

I like this change and am curious what type of overhead this has for the p2p network. Do the p2p improvements you mention cover this additional overhead?

There has been talk of speeding up transaction times to allow for better interactivity for users, potentially 2 or even 1 second block times. The general feedback is that it's definitely possible, at least in theory. Would 2 or even 1 second block times be possible with this additional overhead?

Like I mentioned in another comment, it is my experience that most dapps use HEAD, so they accept the risk (knowingly or not) of reversible blocks in favor of speed. Exchanges, I believe, are the main ones using IRREVERSIBLE. For most dapps, they would see near-HEAD speeds with the protection of IRREVERSIBLE, which is a great best-of-both-worlds scenario.


I like this change and am curious what type of overhead this has for the p2p network.

The overhead on the p2p network is very small: it's one small transaction generated by each of the producing witnesses. So in the normal case, it's 21 small additional transactions that travel over the network, but don't get recorded into the blockchain.

Do the p2p improvements you mention cover this additional overhead?

The p2p improvements vastly outweigh the additional overhead of these 21 transactions (it's not even close). Here's an example of how substantial the improvements are with the new p2p code (one that probably only node operators will fully appreciate): on the mirrornet, with 2 nodes in the US and one overseas in Europe, the block offset times of all the witnesses are actually negative now (while handling the same traffic as the mainnet).

There has been talk of speeding up transaction times to allow for better interactivity for users, potentially 2 or even 1 second block times. The general feedback is that it's definitely possible, at least in theory. Would 2 or even 1 second block times be possible with this additional overhead?

Yes, it won't have any impact at all. The latency improvements I mentioned above would probably allow 1 second block times, especially if we were willing to accept a few more missed blocks occasionally. But switching to 1 second block times now that the chain is launched still wouldn't be trivial, because there's a lot of code that assumes unchanging block times.

For most dapps, they would see near-HEAD speeds with the protection of IRREVERSIBLE, which is a great best-of-both-worlds scenario.

Yes, that's another of the driving reasons for the change.

Would switching from 3 to 1 second block times also scale the network by 3X? I assume block sizes would stay the same as they are now. I believe I remember reading a post somewhere about the eventual risk of blocks filling up if we saw mass adoption. I'm not sure that post had much validity, though, or whether that's even much of a limitation.

Would switching from 3 to 1 second block times also scale the network by 3X?

Well, if block size remained the same, you could store more data, but the easy option if we want to scale to process more data would just be to increase the block size.

This latter option doesn't even require a code change: if enough witnesses vote to increase the block size, it will increase automatically. The "con" of increasing the block size is just that the blockchain can then grow faster; right now the block size puts an upper limit on that growth, so it can be viewed as a "safety net" against a spam attack.
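A minimal sketch of how a witness-voted parameter like this can take effect without a code change, assuming a simple median rule over the elected witnesses' published values (the function name here is hypothetical; the real logic lives in the witness schedule code):

```python
from statistics import median

def effective_block_size(witness_votes):
    # Each elected witness publishes a maximum block size preference;
    # taking the median means a majority of witnesses must raise their
    # value before the chain-wide limit actually increases.
    return int(median(witness_votes))

# 11 witnesses keep 64 KiB, 10 vote for 128 KiB: the median stays at 64 KiB.
votes = [65536] * 11 + [131072] * 10
print(effective_block_size(votes))  # 65536
```

Flip one more witness to the larger value and the median (and thus the limit) jumps to 131072, with no hardfork required.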

But decreasing the block time offers another benefit that can't be gained by increasing the block size: decreased latency between when a transaction is broadcast and when it is accepted into the blockchain.

This can be interesting for things like games. A game will wait until a transaction gets accepted into a block before it will fully process it. So with a 3s block time, the game will process the transaction about 1.5s on average (3/2 s) after the transaction is broadcast, with a worst-case time of 3s (that's a slightly simplified model, but let's go with it). With a 1s block time, the game will respond in about 0.5s on average (1/2 s), with a worst-case time of 1s.
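The simplified latency model above (average of half a block interval, worst case of a full one) can be expressed as a quick calculation:

```python
def inclusion_latency(block_time_s):
    """Simplified model: a transaction broadcast at a uniformly random
    moment waits on average half a block interval for inclusion, and
    at worst a full block interval."""
    return {"average_s": block_time_s / 2, "worst_case_s": block_time_s}

print(inclusion_latency(3))  # {'average_s': 1.5, 'worst_case_s': 3}
print(inclusion_latency(1))  # {'average_s': 0.5, 'worst_case_s': 1}
```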

Changing block time is an absolute nightmare; I wouldn't dare to go that route. I think faster block times for interactive games can be achieved more easily, even with some second-layer consensus with multiple "application specific block producers". Each ASBP can accept incoming transactions in zero time, reacting immediately on a temporary side chain (broadcasting accepted transactions to other BPs of the same app), and then make that side chain permanent by including it in a custom op on HIVE. Of course that solution opens up a lot of issues: how to broadcast the side chain between blocks (maybe that is not needed if different ASBPs work independently, sort of like different instance servers of some MMORPG, which only need to communicate the final outcome), whether the side chain will fit in a custom op (who knows), or, more importantly, why that app needs HIVE in the first place (to keep a record on a proven chain with many nodes and an established economy, to make reaching second-layer consensus easier, things like that).

Changing block time is an absolute nightmare

This topic is being discussed from time to time and I would love to read the longer explanation from your point of view why that's the wrong path.

Ok, a bit more, but still briefly.

It is a high-cost, high-risk, low-reward endeavor. First, it might seem like all it takes is changing the constant that governs the block interval. But there are plenty of places in the code where the author(s) directly state that it is not prepared for different block times (guarded with static assertions), and even more places where that assumption was made silently. Especially the parts where something happens every block would need to be carefully reviewed. There is really no shortage of work that needs to be done instead of that.
Second, shorter blocks pose more problems with network communication (it is not an accident that with half-second blocks, EOS BPs each produce 6 consecutive blocks during their scheduled time). There is also more overhead.
Finally, there are really not that many applications that can't work with 3s blocks but would be fine with 1s blocks (and those that exist should really be based on EOS; that's one of the benefits of having many different chains in the crypto space). If anything, there is a higher chance that an application actually needs communication as fast as possible, where even 1s blocks won't suffice. In such cases there are two possibilities.

First case: the app does not really need to store the interactive data, like the previously mentioned MMORPG. When you are engaged in, e.g., a competitive PvP battle, a server has to take over all communication; it can't go through the blockchain (it only needs to be the same server for the battle participants; different matches can be handled by different servers). But in the end, data on your exact position in every millisecond of the match does not need to be permanently recorded. The server just needs to send a transaction with the final outcome (number of points gained, consumed or exchanged items, rewards given, etc.), so other servers of the same app can properly update their state.

Second case: the app really needs to record everything, e.g. due to regulatory requirements, like if you wanted to build a stock exchange. In that case the most reasonable approach would be to split the work between independent servers (at most one server per market pair) and either record everything on separate public side chain(s), linking to HIVE with hashes only plus records of finalized trades, or (most likely after filtering out bloated HFT activity) put the content of the temporary side chain inside custom op transactions (assuming it would fit).

To sum it up: a lot of effort that is better utilized elsewhere, risk of bugs, technical difficulties and in the end it does not really help.
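The "record the side chain in a custom op" idea above could be sketched roughly like this. All field names and the payload layout are hypothetical, not an existing Hive API; the point is that only a hash of the heavy interactive data plus the finalized outcomes would ever touch the main chain:

```python
import hashlib
import json

def checkpoint_custom_op(app_id, side_chain_blocks):
    """Build a custom-op-style payload anchoring a temporary side chain:
    hash the full side-chain data, and record only that hash plus the
    final outcomes, so interactive traffic stays off the main chain."""
    raw = json.dumps(side_chain_blocks, sort_keys=True).encode()
    return {
        "id": app_id,  # hypothetical app identifier
        "json": json.dumps({
            "side_chain_hash": hashlib.sha256(raw).hexdigest(),
            "final_outcomes": side_chain_blocks[-1]["outcomes"],
        }),
    }

# e.g. a PvP match: only the winner/points summary is anchored on-chain.
op = checkpoint_custom_op(
    "my-game", [{"outcomes": {"winner": "alice", "points": 42}}]
)
print(json.loads(op["json"])["final_outcomes"])  # {'winner': 'alice', 'points': 42}
```

Whether such a payload fits within custom op size limits is exactly the open question raised above.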

Oh, cool. I didn’t realize the block size was so easily modified. I thought there were more technical/resource restrictions that were in play.

Awesome, thanks for the breakdown. I'm totally onboard with it.

We can estimate the overhead somewhat by observing that the signatures are going to be roughly 100 bytes each, so about 2k bytes for all of them. Blocks are currently limited to 64k, so this is around 3%. If the block size goes up, it would be less. (This ignores various details about how transactions and blocks are transmitted around, but it's close enough for discussion.)

That was a pretty good estimate, actually. It's done as a regular transaction (e.g. it has TAPOS data, etc.) and it looks like the transaction will vary between 98 and 115 bytes in size (depending on the length of the witness's account name).
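Plugging these measured transaction sizes into the earlier back-of-the-envelope estimate:

```python
WITNESSES = 21
TX_BYTES_MAX = 115       # largest measured irreversibility transaction
BLOCK_LIMIT = 64 * 1024  # current 64k block size limit

# Worst-case overhead: every producing witness sends the largest transaction.
worst_case_bytes = WITNESSES * TX_BYTES_MAX
print(worst_case_bytes)                                # 2415
print(round(100 * worst_case_bytes / BLOCK_LIMIT, 1))  # 3.7 (% of one block)
```

So the earlier ~2k bytes / ~3% figure holds up, and since these transactions are never recorded into blocks, even this is bandwidth-only overhead rather than blockchain growth.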