@Howo:
Cool. So, new month, new things. I'm still working on the Coinbase integration.
I did most of the easy stuff, which is basically the querying APIs. Now I'm getting into the actual nitty-gritty, which is building the transaction. It's a bit tricky because they want us to be able to generate a raw transaction in one endpoint and then basically get the raw bytes.
@blocktrades:
I'm sorry, I didn't quite hear what you said at the very top.
@Howo:
Oh, sorry.
@blocktrades:
What you said.
@Howo:
I'm working on... so, Coinbase requires Rosetta, which is...
@blocktrades:
Coinbase. Coinbase. Okay, got it.
@Howo:
And so, Crim is in very early talks with them. I mean, they've been in talks with them for a long time. But the state of the discussion is like: do you have Rosetta? No? Okay, talk to us when you have it.
@blocktrades:
Okay
@Howo:
We may never get integrated, but it's not a high cost to just build it.
@blocktrades:
Yeah, that makes sense. Just do it, absolutely.
@Howo:
And worst case, I hear some other exchange may integrate via Rosetta as well, so good news for them. And I mean, worst comes to worst, it's still an interesting project for people to look into.
@blocktrades:
So, what do you have to do to do it? How are you doing it?
@Howo:
So, basically, I need to integrate into a Dockerfile something that launches a full node, well, without HAF, without hivemind. Because, basically, Coinbase doesn't care about the social layer. They just want the...
@blocktrades:
Yeah, they want the transactions.
@Howo:
Yeah, the money layer.
So, luckily, I can skip most of the heavy parts. And then on top of it, I'm building a backend, I mean, a separate program for hived. Right now it's HiveDS-based, because Wax requires TypeScript, and I'm not confident enough in that language to really build something Coinbase-compliant. And also, it's just the Rosetta API. So, basically, it's a set of APIs that you need to build with specific parameters, most of which don't apply to hived, I mean, to Hive as a whole, so it's a bit tricky. For instance, they ask for an endpoint that returns the mempool, but we don't have a mempool. You can skip that endpoint, but it gives you an idea, because the spec is made to match all blockchains. Sometimes there's stuff like "when was this block mined?" where we don't match, et cetera, et cetera. So, yes, it's mostly very boring stuff where you just spend a lot of time building APIs with a ton of different parameters that return the same thing.
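Since Hive has no mempool, the Rosetta `/mempool` endpoint can simply report an empty set; the Rosetta spec's `MempoolResponse` is a list of `transaction_identifiers`. A minimal sketch of that idea (the handler name is illustrative, not from the actual integration):

```python
# Sketch of a Rosetta /mempool handler for a chain with no mempool.
# Per the Rosetta spec, the endpoint returns a MempoolResponse containing
# a transaction_identifiers array; for Hive it can always be empty.
# The function name and request shape here are illustrative.

def mempool_handler(request: dict) -> dict:
    """Return an empty MempoolResponse regardless of the NetworkRequest."""
    # A real handler would still validate request["network_identifier"].
    return {"transaction_identifiers": []}
```

This is the general pattern for the "doesn't apply to Hive" endpoints: implement the shape the spec demands, return the degenerate-but-valid answer.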
@blocktrades:
Okay, so you're basically starting by creating a bunch of dummy functions and then filling them in with functionality.
@Howo:
Yeah, pretty much. I don't expect this will take that much time. The only thing that's a bit, not scary, but more difficult, will be building the raw transactions.
@blocktrades:
Oh, that's probably where Wax, if anything, would be the most useful.
@Howo:
Because I know how to do that on Ethereum and Bitcoin, but I don't know how transactions are actually built on hived.
@blocktrades:
Yeah, I think the transaction building is actually the place where you would want to consider looking at Wax.
@Howo:
But is it possible to sign a transaction offline and then submit it later? Or is there like a time limit on hived?
Bartek:
Yes, it is possible. You just need to use another tool, Beekeeper, which allows you to hold your private keys safely. Then you can prepare transactions offline, sign them, transfer them somewhere else, and broadcast them at a different time. All such things you can quite easily do, I think, using the Wax interface. Some examples are done, I hope; we will prepare more snippets very soon, maybe next week, and they will show how to do that, maybe for every operation present in Hive. Right now there are some short snippets which present some parts of the code, but I plan to prepare some working examples from beginning to end, to be reused directly on the client side. And, in my opinion, we will be able to push them directly to TypeScript playgrounds. But before that we need to change a few things, for example, move this package to the NPM registry from our GitLab, and that needs some work.
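On Howo's "time limit" question: Hive transactions do embed a time constraint. They carry TAPOS reference fields derived from a recent block id plus an `expiration` timestamp, and hived rejects transactions that expire more than about an hour ahead, so an offline-signed transaction must be broadcast before it expires. A sketch of deriving those fields, assuming the standard Graphene layout (first 4 bytes of the block id are the big-endian block number; bytes 4..8 little-endian give the prefix); function names are illustrative:

```python
import struct
from datetime import datetime, timedelta

def tapos_fields(head_block_id_hex: str) -> tuple[int, int]:
    """Derive ref_block_num and ref_block_prefix from a recent block id."""
    raw = bytes.fromhex(head_block_id_hex)
    block_num = int.from_bytes(raw[:4], "big")   # big-endian block number
    ref_block_num = block_num & 0xFFFF           # low 16 bits of the height
    ref_block_prefix = struct.unpack_from("<I", raw, 4)[0]  # bytes 4..8, LE
    return ref_block_num, ref_block_prefix

def expiration(head_block_time: datetime, ttl_minutes: int = 30) -> str:
    """Pick an expiration; hived rejects expirations too far in the future."""
    return (head_block_time + timedelta(minutes=ttl_minutes)).strftime("%Y-%m-%dT%H:%M:%S")

# Hypothetical block id: height 5,000,000, prefix bytes 01 02 03 04.
num, prefix = tapos_fields("004c4b40" + "01020304" + "00" * 12)
```

The TAPOS fields tie the transaction to a recent fork, so "sign now, broadcast much later" only works within the expiration window.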
@blocktrades:
Okay, so I guess I would just hold off on the transaction part till the end and work on the other parts.
Bartek:
Yes, I think, as I understood, maybe it will also be helpful to use Wax just for calling APIs on a given node, because it lets you use a typed interface for such calls. That can make things easier and faster for finding bugs or other problems, because it also defines validators for responses and requests. Some simple examples are already defined in the Wax snippets; that's part of the GitLab repository, where you can select the snippets folder and find a few examples presented as simple pieces of code.
@blocktrades:
So, if I properly understood what Bartek just said, I think he's saying that there have been some pre-checks added to the code. You know, hived doesn't always give the best error responses when there's a problem with the parameters you pass, and I think what he's saying is there's verification in the library to help report better errors.
Bartek:
Yes, and of course you can also define your own validators and APIs. It's quite a straightforward process, and once you've done it once, the next time will probably be even simpler. The code is really simple, and if you want, I can point you to parts of the code you can use as examples and adapt to your needs. Not all hived APIs are already defined in Wax, because we didn't want to push too much code there, and everyone can very easily write the call they need, keeping a minimal application which uses just a few calls, which is probably what's needed in most cases.
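The pre-check idea blocktrades paraphrased, validating parameters client-side so the caller gets a precise error instead of an opaque node response, can be sketched generically. The rule set below is made up for illustration; it is not Wax's actual schema:

```python
# Generic sketch of client-side request validation: check parameters
# before calling hived so problems are reported as clear, specific errors
# rather than a cryptic node-side failure. The schema format is illustrative.

def validate_params(params: dict, schema: dict) -> list[str]:
    """Return a list of human-readable errors; an empty list means valid."""
    errors = []
    for name, expected_type in schema.items():
        if name not in params:
            errors.append(f"missing required parameter '{name}'")
        elif not isinstance(params[name], expected_type):
            errors.append(
                f"parameter '{name}' should be {expected_type.__name__}, "
                f"got {type(params[name]).__name__}"
            )
    return errors

# Example: a hypothetical get_block-style call expecting an integer block_num.
errors = validate_params({"block_num": "5"}, {"block_num": int})
```

In a typed language like TypeScript this checking largely falls out of the type system plus runtime validators; the sketch just shows the shape of the benefit.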
@Howo:
I see. Wax is exclusively TypeScript, right? Because I cloned the repository and there were some Python bindings.
Bartek:
Yes, yes. In the end, Wax will also provide a Python interface. Actually, it already provides some functionality in Python, and you can of course try to use it. There is also a library called Helpy, and I hope the name comes from "help". But in my opinion, it is too early to seriously use it, because we would like to build an object interface over this library. But as I know, our guys, using the Helpy library and the Beekeeper tool, also wrapped for Python, were able to produce a huge number of transactions, because we did some performance tests here. So it is definitely possible. There is also a module called Schemas, which provides types specific to Hive operations and some APIs. So everything probably provides some tools for good coding, but it is not yet as well integrated as in TypeScript. You just need to know a lot more details to start from Python instead of TypeScript right now. But of course, it is possible and some things work. You can create transactions, sign them, and then broadcast them using the Hive API. Such a flow works, but as I know, it is much more low-level than the TypeScript implementation.
@blocktrades:
Yeah, I mean, you're pretty comfortable with JavaScript, so I would say TypeScript would be the place to go.
Bartek:
Yes, I think if you have an application where you just need to move data from one API to another API, or to SQL storage, I don't know, TypeScript could be a good solution. And a few months ago, we also developed an example library called WorkerBee. This is a bot library which reads data from hived endpoints, for example streamed blocks or streamed transactions. Maybe that's an interesting part for you. It allows you to process them in some way and produce other outputs, and also sign them. So that's another aspect which could be interesting for you if you are producing transactions and broadcasting them, because I'm not sure if that is your goal as well.
@Howo:
Okay, I see. That's interesting. Well, I haven't started on the transaction side yet, so I'll probably message you offline when I get to that part.
@blocktrades:
Okay, sounds good.
@Howo:
That's about it for me.
@blocktrades:
Okay. As far as what we've been working on, I guess everybody knows the main thing we did is release the new version of the stack for all the HAF API node code last week. So far we've had a few reports back, and everything, I think, has been smooth; I don't know of any major problems, but we're still waiting for feedback from a bunch of the operators just to see if they have any problems. I'm not really expecting anything; we've been testing it quite a lot for quite a long time now, so I think it's all going pretty smoothly. At this point, we're starting to look at what we're going to do next. We're also just continuing all the projects we currently have going on, like Wax, that's a good example, and there are a bunch of other ones: the command-line wallet, where a lot of progress is still being made; Denser, which is the Condenser replacement; and the HAF block explorer, which, while it's released, we're still making quite a lot of improvements to as well, especially on the UI side. I think the base API layer is pretty reasonable at this point, but I'm sure we'll still be tweaking it, especially as we develop the UI. But the UI is where most of the work is going right now, as far as I can tell. And let's see what else. The other main thing: now that we've got a stack where we can quickly develop and release new stuff, in a more incremental way than we've had previously, we expect to have a faster design cycle going forward. So we're looking to release new versions of hivemind and the HAF block explorer, probably in the next few months, I hope. We'll see in the end, because we're still going to make some dramatic changes to hivemind, just to make it scale faster and things like that.
Hivemind is the one thing... we did a fair amount to it, I won't say we didn't, we did quite a bit, but there's still a lot more that we know we could do on the hivemind side. The other stuff we released, even the HAF block explorer, is in some ways more mature, I think, because it was developed originally as a HAF application, whereas hivemind was not originally a HAF application, so it still has a lot of legacy code in there from before we knew as much as we know nowadays about how to develop this kind of stuff. And of course we didn't create it originally either, so there's that. But overall, like I said, we are going to release a new version of hivemind. And beyond that, we're looking at what we might do in terms of other apps too. I've already mentioned that I think one of the biggest things we'll be doing next is releasing a lite accounts app. This will be really important for the second-layer ecosystem: a common way for anybody to create apps that can interact with the accounts created this way. We'll probably be working closely with the VSC guys on that too, just to be sure we're aligned on how we're thinking about all that stuff, but so far I've looked at what they've done, and it looks to me like everything they're doing can fit nicely on top of the stuff we're doing already, so I don't think there will be any issues there. And beyond that, we're looking at what we'll be doing to hived too. We've got a meta issue out there with a ton of stuff that we've been thinking about doing for a long time; I can't even go through it all, there are too many issues, but feel free to review it. It's out there in the hive repo as one big issue.
One of the most significant is the one we've talked about for a bit, which is the concept of light nodes: hived nodes that don't need to store the entire blockchain. This will primarily be for apps, I think; one of the drivers is that we can develop more lightweight apps that don't need to store a whole block log. You can always just use an API node for your data, but if you want to make your app totally independent of anything else except the blockchain data itself, not having to keep a block log around will really help. It will certainly help for HAF servers and things like that, since that's a big chunk of data that no longer has to be stored locally. And let's see what else. We're also still looking to do some performance improvements to hived itself. One of the more interesting things we're looking at is overhauling some of the memory management, which might give us some significant speedups. I can't say for sure because we haven't tested it yet, but in our previous experience with other code bases we've seen that this sometimes gave dramatic improvements; whether that will be the case for hived is hard to say yet, but I have my hopes. And along the lines of performance improvements, Bartek tells me that today they finished a test we've been thinking about for a while. We've tested hived before with larger block sizes; right now our block size is 64K, which, you know, every three seconds already serves quite a bit of data, and it hasn't been a problem to date, but inevitably as we grow we'll need to handle larger block sizes. The blockchain itself is designed for it, since the peer-to-peer layer was set at a limit of two megabytes when we first developed it.
And so we have done tests in the past on hived, these, I guess you could call them surge tests, where we generate a huge number of transactions on testnets. We've already proved out that hived can handle the two-megabyte blocks, but we had never tested it with HAF. So, you know, we had some concerns; this is a whole new subsystem which is basically storing a huge amount of important data, and we needed to make sure that it could also handle those larger blocks. Bartek told me today that the tests they've done so far seem to indicate we're not going to have a problem there either. And I mean, without making any adjustments to HAF itself, just the code as it currently is, it was able to operate in live sync with two-megabyte blocks being sent out in the testnet situation. I think that's extremely positive. We'll still be doing some more testing, but I think we're on the right track, so we have lots of headroom. Just to put that in perspective, a two-megabyte block is about 30 times larger than current blocks, so if we're not filling up our normal blocks right now at 64K, we have plenty of headroom on that side. I guess there's a lot of stuff going on, but I don't want to take up the whole time talking about individual stuff, so I'll pass it on to anybody else who wants to talk about what they're working on now.
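The headroom figure is easy to check with a back-of-the-envelope calculation: the exact ratio between a 2 MiB block and today's 64 KiB block is 32x, "about 30 times" in round numbers, and at one full 2 MiB block every 3 seconds the sustained data rate is still modest:

```python
# Back-of-the-envelope check of the block-size headroom discussed above.
current_block = 64 * 1024        # 64 KiB current maximum block size
p2p_limit = 2 * 1024 * 1024      # 2 MiB peer-to-peer layer limit
block_interval = 3               # seconds between blocks

ratio = p2p_limit // current_block           # how much larger 2 MiB blocks are
max_throughput = p2p_limit / block_interval  # bytes/second at the p2p limit

print(ratio)                     # 32, i.e. "about 30 times larger"
print(max_throughput / 1024)     # ~683 KiB/s sustained at full 2 MiB blocks
```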
Bartek:
The interesting part specific to this performance test with two-megabyte blocks is that everything was generated using the standard interfaces of hived, so broadcast APIs and Python tools, which also used Wax and Beekeeper to prepare, sign, and broadcast the transactions. So no backdoor interfaces that offer some faster path; everything was done on regular nodes. Yes, on a single server, but there were several nodes and several processes, just to have communication between them. Other than that, everything was done as usual, so for me it's a very good surprise and a really good result that it seems to work. And the server is not even that fast; we were actually surprised today because the HAF results were verified on a regular developer machine, which is not the best and has just 32 gigabytes of RAM. So this is not even a fast server. It's amazing that everything seems to work so well. And, as I know from the collected stats, block processing, including the data dumps, measured probably near 300 milliseconds. That was the average time for receiving and processing each block.
@blocktrades:
Was that 300 milliseconds?
Bartek:
Yes, 300 milliseconds.
@blocktrades:
Okay, yeah, that sounds good. That's a great number.
Bartek:
Yes, I also think so.
@Mcfarhat:
So yeah, blocktrades, regarding the Hive nodes: we've got our latest server, so we started syncing a couple of days ago. It's a much stronger machine than the one I had before.
@blocktrades:
Okay.
@Mcfarhat:
Good. Yeah, I'm really happy so far. It's a 128-gigabyte 7950X3D.
@blocktrades:
Oh, wow, that's going to be super fast. Our fastest machine here is the one you're describing, except you've got 128 gigabytes; ours has 64 gigabytes.
@Mcfarhat:
We're the strongest guys now.
@blocktrades:
I think you are. And that's a generation above the one we're using for production; we're only using a 5950, or a 5900 actually. So yeah, that's going to be fast. I can tell you roughly the speeds you're going to get, because I know what we get on ours. You're going to have around 14 hours for HAF to replay, and it's only like two days after that for everything, so you should have a full replay.
@Mcfarhat:
Yeah, I did start; I did the sync though, not a replay, and it took me like 24 hours.
@blocktrades:
Yeah, that's reasonable.
@Mcfarhat:
Yeah, and then for the indices like five hours or something.
@blocktrades:
Yeah, four or five hours is normal for the indices. But where you're really going to see the boost is hivemind. It usually takes, I would say, around 80 hours, and with those machines it comes out for us in under two days, just around 48 hours.
@Mcfarhat:
Yeah, I'm hopeful. It's been, up to now, like 26 hours doing hivemind, so if it concludes in less than two days I'm more than happy.
@blocktrades:
It's pretty impressive, considering how long it used to take even before, when we had a lot fewer blocks.
@Mcfarhat:
Yeah, yeah. But a couple of things I noticed, I don't know why, when I'm checking the admin interface. I mean, first, I cannot access the node via the API directly; I can access the admin interface, HAProxy, or the admin /versions. There's an error on Caddy, I think; it says unhealthy on Caddy. I'll share the links with you.
@blocktrades:
Okay, it would be unhealthy right now, because you're not...
@Mcfarhat:
Not fully synced.
@blocktrades:
I think so, but send me the links and I'll take a look. I don't usually look at Caddy too often, to be honest. Are you looking at HAProxy's results?
@Mcfarhat:
Yeah, there's the admin interface that says...
@blocktrades:
That goes to HAProxy.
@Mcfarhat:
Yeah, and then.
@blocktrades:
Oh, slash versions.
@Mcfarhat:
Okay, yeah, okay. It's saying unhealthy about a couple of things, including Caddy. Let me see: HAF is good, HAProxy is good, health checks are fine, hivemind is fine. There was one other thing that said unhealthy. Oh, no, it's gone, it seems. Okay, I don't know. So it's only Caddy, on the way to the HAF PostgREST address; it says unhealthy. I don't know, is it because we're syncing hivemind? I wouldn't know.
@blocktrades:
No, it should already be healthy. So something is strange. We'll talk about it offline.
@Mcfarhat:
Okay, okay, I'll send you the links.
@blocktrades:
All right. Yeah, sounds good.
@Arcange:
About API nodes, I still have one question. I'm still running RC8; to deploy the latest version, do we need to replay or not?
@blocktrades:
If you're running RC8... I'm trying to remember, that wasn't the last one, right? We went to RC9 after that. Yeah, RC8, I do not know; I'm sorry, I can't remember really clearly. I'll have to go back and look at some of my notes, and I'll get back to you on that offline.
@Arcange:
Yeah, okay. Thank you.
@blocktrades:
Sure. Yeah, the most likely issue could be something with the HAF block explorer or hivemind. You might have to replay those; I doubt you'd have to replay hived, but I'll definitely have to check.
@Mcfarhat:
I don't mind replaying anymore. It's fast.
@blocktrades:
Yeah, you got that machine. We have two of those machines and I absolutely love them. They're just so fantastic.
@Mcfarhat:
I mean, you look at it running and it's like it doesn't care. You just open htop and mostly it's smooth. Everything is functional.
@blocktrades:
Yeah, those are really great machines, and I'm really looking forward to the new generation too, because somehow they're supposed to be much faster. I'm like, how do you do it? But okay, we'll see. Crazy technology, man. Yeah, seriously, it really is. Okay. Anybody else have any questions or anything, maybe off-topic?
@Howo:
I have some questions for Arcange, if you have time to talk. If not, we can take it next time; I'm not in a rush on this.
@Arcange:
It may have to be next time, because I have an appointment in a few minutes. But go ahead; if I can reply, just ask.
@Howo:
Well, it's about whether the badges you manage are open source, and whether we can integrate them into the, let's say, regular stack, hivemind, HAF, etc.
@Arcange:
I think we will need to talk about it later for me to answer.
@Howo:
Sure, no worries.
A possible two megabyte block size is super exciting!
Awesome to hear you guys chat openly, even if I don't understand much of it, it's good to know you guys are working hard.
It’s been a long time since I heard from Arcange
Good to see his name here and I’m sure he’s doing well