Core dev meeting #63

in #core · 9 days ago

Apologies for the delay. It's been a busy few weeks.

@mcfarhat

Okay, we're recording. If you guys want to start, it's up to you.

@blocktrades

Okay, awesome. Well, since we think Howo possibly isn't going to make it today because of the time zone change, this is Dan, otherwise known as blocktrades, and I'll start off the meeting discussing what we've been doing lately.

I guess, first of all, on the HiveD front, basically the blockchain node software, we've been doing a couple of things of interest, I think, to everyone. First, there was a request to increase the transaction expiration time; previously a transaction would only last for about an hour once it was broadcast.

And this was a problem for people who wanted to do multi-signing, because we often do multi-signing out of band, and you might have to contact some people and wait for them to sign and then sort of pass it around to get all the signatures. So it was requested to increase the transaction expiration time to like a day.

In theory, that looks pretty easy to do: it's just a few lines of change in the code. But we also wanted to be sure this didn't cause any problems, because obviously the time was only set to an hour for some reason, presumably. So we looked into it, and we did find cases where this increase in transaction expiration time could allow someone to attack the network more easily, by creating a huge number of transactions that eat up space, and specifically memory, on people's nodes. Figuring that out was a lot of work, of course, because we had to set up a test where we could literally do that kind of flooding of the network, which isn't as easy as it sounds.

We had to write special versions of HiveD that were capable of generating and processing more transactions than HiveD does normally right now. But we had already been working on that for some of our other testing of future performance.

So we got all that working, and we were able to see that indeed, by flooding, you could increase the amount of memory used on a HiveD node by several gigabytes. To resolve that problem, we made some changes to the way RC calculations are performed under flooding conditions, so that transactions temporarily get progressively more expensive under a flooding condition and therefore get dropped by the node instead of eating up memory.

We also set limits for where these kinds of triggers happen. This doesn't affect the actual final RC cost when a transaction gets put into the blockchain; it's a temporary cost that's only calculated locally on the node when it first processes a transaction. Okay, Gandalf said he's having some problems with sound, but I don't think it's impacting any of us. So

@mcfarhat

No, I can hear you well.

@blocktrades

Yeah, okay. Good. So basically, this is really a change to the RC calculations. It's just a temporary one at the nodes themselves so that they don't eat up too much memory under these flooding conditions. And again, this was very speculative.

I don't know that we would have this problem happen anytime soon, because it took us even a while to be able to generate the conditions, but we want to be as safe as possible when we make changes like this. So it sort of future-proofs us against flooding attacks.
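
To make the idea concrete, here is a rough sketch of the kind of progressive-cost rule described above. It is purely illustrative: the names and thresholds are made up, and this is not the actual HiveD implementation, which is written in C++.

```typescript
// Illustrative only: a progressive "temporary RC cost" rule for a node's
// pending-transaction pool. All names and numbers are hypothetical.

interface PendingPoolState {
  pendingBytes: number;   // memory currently used by pending transactions
  softLimitBytes: number; // point at which costs start rising
  hardLimitBytes: number; // point at which new transactions are simply dropped
}

// Multiplier applied to a transaction's normal RC cost, local to this node only.
// Below the soft limit it is 1 (no change); between the limits it grows rapidly,
// so a flood of transactions quickly exhausts the sender's RC and gets dropped.
function temporaryRcMultiplier(pool: PendingPoolState): number {
  if (pool.pendingBytes <= pool.softLimitBytes) return 1;
  if (pool.pendingBytes >= pool.hardLimitBytes) return Infinity;
  const fill =
    (pool.pendingBytes - pool.softLimitBytes) /
    (pool.hardLimitBytes - pool.softLimitBytes); // 0..1
  return Math.pow(2, 10 * fill); // exponential surcharge as the pool fills up
}

// A node would accept a pending transaction only if the sender can afford the
// surcharged cost; the cost recorded in the blockchain itself is unaffected.
function acceptPending(
  normalRcCost: number,
  senderRc: number,
  pool: PendingPoolState
): boolean {
  return senderRc >= normalRcCost * temporaryRcMultiplier(pool);
}
```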

So I think that's another positive for this change, and it will happen as part of the next hard fork. I've been planning the next hard fork for December, and that's where it's currently set. But there's one more change I really want to get in, so what I'm thinking of doing right now is splitting everything else out.

We've got two sort of separate sets of changes: the hard fork changes, and all the changes related to the API nodes. The API node changes are things like updates to HiveMind, updates to the block explorer, the balance tracker, the HAF account history, all those kinds of things. Those are quite separate changes, and they can be deployed separately. So I'm thinking, as of basically today I started thinking about it.

I still want to keep a December timeframe for upgrading all the nodes, so I think we should do the upgrade to all the nodes in December. Everything looks to be coming together as far as testing goes, so we should be ready in December for that. But then I want to push out the hard fork date, so that we can make some more changes to HiveD prior to the hard fork.

So I'm thinking first quarter for the hard fork itself, but an API node release in December. That shouldn't cause any real trouble for people running nodes, because there won't be any kind of replay required when they upgrade to the hard fork version. So basically, everybody will have to do a replay in December, and then there shouldn't be anything required in the first quarter to upgrade to the hard fork version of HiveD itself.

The main reason I'm sort of delaying the HiveD release is because I want to make some changes to the signing. There have been some requests for changes, and I also want to make some changes of my own to the signing. As part of that, I also want to release another HAF app, which can basically be deployed later, which is the Light Accounts app. That one's a little behind, because the guy who had planned to work on it has been tied up with another project, and he's just finally coming free now.

I've already made some posts which kind of cover what we've done lately, but I just want to give a few quick overview highlights, and also sort of an update since that post. So like I said, there's not a lot in the hard fork right now in terms of changes that I think could cause many issues or potential concerns. The biggest one is going to be this increased expiration time, and again, that'll be first quarter.

But we've done quite a bit on the HAF side of things. We basically rewrote the loop for how HAF apps work, and that was done in order to make it more difficult to write an app that has problems. One of the things we noticed when we were developing HAF apps was that if you made commits to the database at the wrong time, and then your process was interrupted for some reason, say your app crashed or something like that, then it might end up in an improper state when it relaunched.

So we redesigned the loop that HAF uses, so that it's now very difficult for someone to write a main loop in their app that has that kind of problem anymore. We've tested the new loop on all our HAF apps, and it works great; we haven't had any more problems of that sort since that change.
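
As a rough illustration of the commit-ordering problem being described, here is a sketch of an app iteration written against Postgres with node-postgres. The table names, SQL, and helper functions are invented for illustration; they are not the actual HAF API.

```typescript
// Illustrative sketch of the failure mode and the safer pattern.
// Table names and SQL are hypothetical; real HAF apps use HAF's own helpers.
import { Client } from "pg";

async function processBlockRange(db: Client, from: number, to: number): Promise<void> {
  // Hypothetical app-specific work for the block range [from, to].
  await db.query(
    "INSERT INTO my_app.stats(first_block, last_block) VALUES ($1, $2)",
    [from, to]
  );
}

// Risky pattern: progress is committed separately from the app's own writes.
// If the process dies between the two commits, the app restarts in an
// inconsistent state (progress says "done" but the data isn't there, or vice versa).
async function unsafeIteration(db: Client, from: number, to: number): Promise<void> {
  await db.query("BEGIN");
  await db.query("UPDATE my_app.progress SET last_processed = $1", [to]);
  await db.query("COMMIT"); // crash here => progress and data disagree
  await db.query("BEGIN");
  await processBlockRange(db, from, to);
  await db.query("COMMIT");
}

// Safer pattern (the spirit of the redesigned loop): the app's writes and the
// progress marker are committed in a single transaction, so a crash at any
// point leaves the database either fully before or fully after the iteration.
async function safeIteration(db: Client, from: number, to: number): Promise<void> {
  await db.query("BEGIN");
  try {
    await processBlockRange(db, from, to);
    await db.query("UPDATE my_app.progress SET last_processed = $1", [to]);
    await db.query("COMMIT");
  } catch (e) {
    await db.query("ROLLBACK");
    throw e;
  }
}
```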

Another thing we've done recently is testing a switch from Postgres 16 to Postgres 17. I'm running tests on that right now and doing benchmarks, and so far the speed looks the same, so we don't have any problems as far as I can tell. I'm still replaying HiveMind, but once that replay is done, the last thing I'll need to do as far as benchmarking is test with production data on the query side, to be sure none of the queries themselves are slower. But the replay and sync times all seem to be quite good so far with Postgres 17.

So I don't anticipate any problems with the move to Postgres 17. Another change I'm also testing in HAF, which I guess is the other big final change we'd like to get into HAF before the release, is that we've shifted the data stored in HAF into a separate schema called hafd.

Basically, all the data and all the code, the sort of API, were all stuck in one schema called hive before. Now we've separated it into two different schemas: one contains the data and one contains the API. We did this to make it easier for us to generate upgrades between versions of HAF, because it's been sort of troublesome to do an upgrade between two different versions of HAF, at least easily. This should simplify the process of making an upgradeable version.

Let's see, so that's most of what's going on in HAF. One other project we've been working on for quite a while, which I've mentioned in passing (it's been about two or three months now), is that we've basically been rewriting the server side of HiveMind. It currently uses a combination of SQL and Python code to respond to queries.

That had a couple of disadvantages. One, SQL code just tends to be more efficient on a database server than Python code. And two, it made it difficult for us to use a PostgREST server to serve up the API calls. For all our other HAF apps, we're basically using PostgREST servers now instead of Python-based servers, because the performance of PostgREST is much better.

So we finally, just as of today I guess, finished the conversion of all of those API calls to pure SQL. Next, we're going to benchmark that specifically with PostgREST and check the new performance of everything. This was also important for another reason, not just for the performance advantage we'd get, but also because, as a lot of you guys know, we're making a move from the JSON-RPC-based API to a REST-based API, and our preferred way of doing those REST APIs is using PostgREST.

So we really needed to switch the server to PostgREST in order to start the move to a REST-based API for HiveMind itself. We've kind of completed the REST API for all the other apps: we've done it for the reputation tracker, for the balance tracker, for HAF itself, and for the block explorer.
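
For anyone who hasn't tried the new style yet, here is a rough comparison of the two call styles. The REST hostname and path below are placeholders, not the final published routes; the Swagger pages are the authoritative reference.

```typescript
// Old style: JSON-RPC 2.0 over a single POST endpoint.
async function getAccountReputationJsonRpc(account: string): Promise<unknown> {
  const res = await fetch("https://api.hive.blog", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      method: "condenser_api.get_account_reputations",
      params: [account, 1],
      id: 1,
    }),
  });
  return (await res.json()).result;
}

// New style: a plain REST GET served by PostgREST, one URL per resource.
// The hostname and path here are illustrative placeholders only.
async function getAccountReputationRest(account: string): Promise<unknown> {
  const res = await fetch(
    `https://example-api-node/reputation-api/accounts/${account}/reputation`
  );
  return res.json();
}
```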

But we still don't have a REST API for HiveMind yet. Now we're finally in a position where we can start on that work as well. As part of that, I guess we're also going to do some analysis of the existing HiveMind API calls and see if we can make some of them a little more logical as we make the move to REST. So that's what's going on with HiveMind.

For the other apps I mentioned, like I said, the other thing we've really been doing is creating REST API calls for the apps that previously had JSON-RPC-based calls. I've tested and published that, and we've got a server up now where anybody can test the new REST API calls. The move to REST has also allowed us to document the API in an interactive way using Swagger.

So now it's really easy if a dev wants to come along and see how the API works: they can just go to those pages, make interactive calls, and see what the calls do and what happens if they make a change, without having to write a lot of code. They can just sit there and tinker within the Swagger pages, which is really nice, I think.

@mcfarhat

You know, not to interrupt you, Dan, but funny enough, one of my new devs, and I had not mentioned Swagger to him, was researching something and just came to me saying, okay, this is how we call this thing. And I said, how did you come across Swagger? He said, I was just Googling things and found it online. It was amazing. I loved this.

@blocktrades

Oh, so he found it, he found it online.

@mcfarhat

Yeah, yeah, he found it online while Googling stuff. Yeah, that was amazing.

@blocktrades

Oh, that's cool. That's quite cool. So yeah, I think the switch to Swagger is really going to be a big leap forward for our documentation process for the whole Hive API. And it kind of forces everybody to do it, too.

And the idea is that now all the documentation will have a kind of standardized format, which I think is really important, especially as we get more devs working on different projects.

And I guess it's a good time to mention that over the past little bit we've also been slowly transitioning development effort for the Block Explorer over to McFarhat's group. We're still doing a little bit of work on our side, but we're in the process of finishing up and finalizing the handoff of all that code to his team, which I think is great.

It gets another group very familiar with the HAF development process, and it also frees us up to work on some other projects as well.

Let's see, what else? We've done a bunch of work on the front-end side as well, and I've covered that in my post, so I don't want to get into too much detail.

But I'm trying to think what I should really cover there. Actually, before I go to that, the other thing I want to talk about a little bit is the state of Wax. Wax is also getting quite close to release, and it's still on time for a December release as well. Wax is basically our new library for using all these APIs, the new REST APIs especially.

The most recent thing we're doing in Wax is building a health checker. In fact, if Bartek's able to talk, I might ask him to cover the current state of the health checker, because I haven't had a chance to check with him on it.

But I'll just describe what it's for first. The health checker is basically some code inside Wax that allows you to check the state of the various API servers you're using and select which API servers to use.

So it basically allows you to switch back and forth between the ones with the best performance. But I'll let Bartek speak and describe the features in more detail.

@BW

Yes, this is a part of the library, let's say a class, which allows you to register API endpoints and the servers we want to examine with such calls.

It is possible to register custom validators with this tool. So a programmer who would like to integrate it into their application can write validators and check that the responses received from given servers match expectations.

The tool periodically sends the registered and defined requests to the specified servers and calculates a score for each endpoint, and then notifies the parent application about changes in the scores, the best endpoint, the worst, etc. Actually, just today we completed another stage, improved the internals of this tool, and finally added support for REST calls as well.

So both types of calls can be examined by this tool. One of our guys still working on the Block Explorer is writing some UI components which integrate this tool; they will first be used on the Block Explorer site. As far as I know, the work is progressing quite well. Probably tomorrow we could see some UI for the health checker component, and maybe we will publish some results about it.

Our idea is to share this component with other applications, especially those developed here as well, for example Denser. Maybe actually every application using Hive API calls could use it to verify endpoints and have some common support for that.

This component also uses Wax's support for making API calls in an object style, because Wax allows you to define an object structure and then use API calls as regular object methods. That greatly simplifies using it. And actually, that's most of the information about it. Maybe you have some questions.
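
Since the health checker hasn't been published with documentation yet, here is a rough sketch of the kind of interface being described: registering endpoints, a custom validator, and score-change notifications. The class and method names are invented for illustration and are not the actual Wax API.

```typescript
// Hypothetical shape of an endpoint health checker, for illustration only.

type Validator = (response: unknown) => boolean;

interface RegisteredCheck {
  name: string;                       // e.g. "database_api.find_accounts"
  request: (endpoint: string) => Promise<unknown>;
  validate: Validator;                // programmer-supplied correctness check
}

interface EndpointScore {
  endpoint: string;
  score: number;                      // faster, valid responses => higher score
}

class HealthChecker {
  private checks: RegisteredCheck[] = [];
  private listeners: Array<(scores: EndpointScore[]) => void> = [];

  constructor(private endpoints: string[], private intervalMs = 5_000) {}

  register(check: RegisteredCheck): void {
    this.checks.push(check);
  }

  onScoresChanged(listener: (scores: EndpointScore[]) => void): void {
    this.listeners.push(listener);
  }

  // Run every check against every endpoint, concurrently, and score by
  // response time; invalid or failing responses score zero.
  async runOnce(): Promise<EndpointScore[]> {
    const scores = await Promise.all(
      this.endpoints.map(async (endpoint) => {
        const results = await Promise.all(
          this.checks.map(async (check) => {
            const start = performance.now();
            try {
              const response = await check.request(endpoint);
              if (!check.validate(response)) return 0;
              return 1_000 / (performance.now() - start); // faster => higher
            } catch {
              return 0;
            }
          })
        );
        const score = results.reduce((a, b) => a + b, 0);
        return { endpoint, score };
      })
    );
    this.listeners.forEach((l) => l(scores));
    return scores;
  }

  start(): void {
    setInterval(() => void this.runOnce(), this.intervalMs);
  }
}
```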

@blocktrades

Yeah, actually I do have a few questions other guys asked me too. First, just correct me if I'm wrong about any of this. As far as I know, you can basically specify a node that you want to talk to for either a set of APIs or maybe even a specific API call.

@BW

Yes, it is possible to define a set of APIs for a given node, and a single one as well. Results are then collected per API method, and the best node is selected based on the set of methods.

@blocktrades

Okay, and what's the metric it's using for the best node? Is it in terms of latency, like how fast it responds?

@BW

Yes, the tool analyzes response times, and actually we're quite limited here because we can only use the APIs available in a web browser, which rules out most lower-level network instrumentation.

But it was possible to collect some timing metrics for making the calls. Of course, another important part of the metric is the correctness of a given node and whether it actually supports a given API method. That part is covered by the registered validators, which are defined by the programmer.

So we can, for example, register a find_account call and verify that a specified node recognizes, for example, the blocktrades account or gtg or whatever is needed, and allow such a node to be selected only if it can correctly respond to that.

I hope it is quite nicely designed and easy to use. It uses standard patterns commonly used in front-end development, like event emitters and so on. What more?
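
Continuing the illustrative sketch from above, registering a validator like the one described might look roughly like this; again, the class and its methods are hypothetical, though the find_accounts call itself is a real hived API call.

```typescript
// Hypothetical usage of the sketched HealthChecker: accept a node only if it
// correctly answers a find_accounts call for a known account.
const hc = new HealthChecker([
  "https://api.hive.blog",
  "https://api.syncad.com",
]);

hc.register({
  name: "database_api.find_accounts",
  request: async (endpoint) => {
    const res = await fetch(endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        jsonrpc: "2.0",
        method: "database_api.find_accounts",
        params: { accounts: ["blocktrades"] },
        id: 1,
      }),
    });
    return res.json();
  },
  // Accept the node only if it actually recognizes the account.
  validate: (response: any) =>
    response?.result?.accounts?.[0]?.name === "blocktrades",
});

hc.onScoresChanged((scores) => console.log("endpoint scores:", scores));
hc.start();
```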

@blocktrades

I guess I had a question. So it's basically verifying a node is working based on passing a validation. Is that periodically performed and if so, how often?

@BW

Yes, it is performed periodically, and the calls are probably made once every few seconds.

@blocktrades

Okay.

@BW

Actually, this is a configuration parameter specified as a constant, so the frequency of such verification can easily be adjusted. Checks are also made concurrently, so every method is called concurrently and checking a whole set is not so time consuming.

@blocktrades

Nice. Okay.

@Brian of London (Dr)

I get that. A quick question, can I jump in with one? Sure. It's Brian. I'm just about to rewrite my back-end Python stuff. How close is Wax and these kinds of things to being a module I can just, you know, pip install, like a replacement for Beem or LightHive? Are we close to a release, or is there a pre-release? Is anybody putting it on PyPI yet?

@BW

We are making progress toward being able to release the Python Wax. We've actually lately completed some important parts of that, and a module called Beekeepy was developed.

That's a Python wrapper over our Beekeeper tool, which is required to sign transactions, and it was the first part needed to start working on the object definition of the Python interface in Wax. I hope we can start designing this interface maybe even this week. It depends on other work specific to our tool Clive, which uses a lot of our Python resources.

But I'm still very focused on starting that work because I know it is very important for developers, and we will try to prepare some initial version of it, maybe even this year.

Probably most of the design of this Python object interface will be similar to the TypeScript version, which was designed quite a while ago and has already been tested by several applications. I hope the Python usage patterns can be really close to those, so we can design this interface in a similar way and avoid reinventing the wheel in this part.

So I hope the remaining work is mostly implementation, without too much time needed for the design of this interface.

@Brian of London (Dr)

Thanks for that. If I can help at all, send me a direct message or something. I don't know if my skills are up to it, but when I looked at Wax itself, it kind of wasn't in a format where I could figure out how to call it or make any use of it.

So if I can help, if someone can write a scaffolding, I'll fill bits in as needed. I can maybe put some time into that. So thanks. Okay.

@blocktrades

So just to clarify, I guess probably everybody knows this, but basically there are two versions of Wax: the TypeScript version and the Python version. The TypeScript version has definitely been the higher focus of the two because it's the web version.

There are just so many more web-based apps than Python-based apps in the Hive environment. But we do have a Python version of Wax as well; it's just that, as Bartek mentioned, the object-oriented interface isn't at the level of the TypeScript one yet. And we are testing both versions of Wax.

We've been using Denser, which is a replacement for Condenser, and the HAF block explorer to test the TypeScript version. And we've been using Clive, which is the Python-based wallet we've been developing to replace the old command-line wallet, to test the Python version of Wax.

Okay. So yeah, we are a bit behind on the Python version of Wax, so we can only guess at this point, but we would certainly like to release it in December if possible. We'll keep our fingers crossed that that process goes smoothly.

And Brian, thanks for your offer of help; we certainly may take you up on it as we get further into it. Let's see, what else? We've covered Wax, so I guess just briefly on the UI stuff: Clive, the command-line wallet, has been going through pretty radical changes, a lot of them based on my requests for changes to the way the UI works and everything.

It looks to me like progress is going reasonably fast. So I think Clive itself is likely to be releasable in December, but I'm not going to guarantee that one yet. Certainly a lot of the changes that I've asked for look like they should be finished sometime this month.

So then I'll start testing it more personally again. I don't know, Bartek, anything else you can think of that I should mention or share before we move on to somebody else?

@BW

Actually, as far as UI work, we can mention that the Denser guys say only a few features are missing in the wallet.

@blocktrades

Oh, that's good to hear.

@BW

To complete the whole functionality of the application. I hope the missing features will be completed soon, hopefully this month, so we can be ready to try to switch to Denser too at the beginning of the year, or even during this December release.

So that's probably also good news. And actually, what more? The HiveMind work we have covered. We are also working on using the Python version of Beekeepy and parts of Python Wax in our internal testing, where we run tests massively on CI; that could be interesting for Brian.

That also shows us some problems and bugs, which get fixed, of course. What was also done in the last week: some big step, some milestone was reached in this part, which brings us closer to the Python version of Wax and to being able to start creating transactions and signing them. So I also hope this is useful information. Yes, I think we've covered most things.

@blocktrades

Okay, I guess I thought of one more thing, which is that I've also been testing the block log split stuff. It's not a new thing for us, but it's still kind of new for everybody else. That all seems to be working just fine. On the nodes I'm running, I'm basically doing a lot of HAF API node testing right now.

That's the primary thing I've been doing lately. By default it now splits the block log, and so I wound up with basically one block log part for every million blocks. So I've got something like 90 files in there now, each one a million blocks long, and all of that seems to be working smoothly. So no problems there.

I'll probably also start testing some of the lighter configurations, where I reduce the number of blocks retained, just to see how that works out as well. But I assume we're not going to have any issues there. Anyway, that functionality seems to be just fine. So, does anybody else want to cover any work they're doing right now? McFarhat, you want to go?

@mcfarhat

Yeah, sure. Thank you, thank you for the updates. Alright, so as many of you are aware, we've been helping out with a lot of work on the Block Explorer UI for the past few months, and we're trying to put more effort in now, with more resources assigned. So I'll share a few updates on some of the things we've been doing. Mainly we've been focused on adding more functionality to the user profile page: things like adding the account value in dollars, a delegations count, adding sorting by RC delegations and by HP delegations for the recipients of delegations, and fixing a few UI issues like the date picker and the light/dark mode, which was introduced recently.

There was a bug that we just detected in the vesting ratio values, which I think one of my devs just fixed. There's also a feature that was suggested recently about a transaction differentiator, like having a perspective view when you are on a user profile; I think Lucas just finished working on this today. And my other dev, I think, is almost done with displaying what happens when a user has a proxy.

So we're now displaying the votes that the proxy actually casts, in a kind of differentiated manner. If you go to a user profile and see that this user uses this person as a proxy, you'll actually see who the proxy votes for.

And even if you go three layers of proxy deep, which is the blockchain's limit, you can actually see this proxy has this proxy, which has this proxy, and eventually these are the people this account's votes go to. So I think this is helpful: if you are voting for someone via a proxy and you want to see directly who that person votes for, this will make it much easier.
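
As a rough illustration of how such a proxy chain can be resolved from chain data, here is a sketch using the condenser_api.get_accounts call and its proxy and witness_votes fields; the node URL and the depth limit here are just examples, not necessarily what the Block Explorer UI does.

```typescript
// Illustrative sketch: follow an account's proxy chain and report whose
// witness votes ultimately apply.
interface AccountSummary {
  name: string;
  proxy: string;          // empty string when the account votes directly
  witness_votes: string[];
}

async function getAccount(name: string): Promise<AccountSummary> {
  const res = await fetch("https://api.hive.blog", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      method: "condenser_api.get_accounts",
      params: [[name]],
      id: 1,
    }),
  });
  const json = await res.json();
  return json.result[0] as AccountSummary;
}

// Walk proxy -> proxy -> proxy (bounded depth, per the on-chain limit mentioned
// above) and return the chain plus the final account's witness votes.
async function resolveProxyChain(name: string, maxDepth = 3) {
  const chain: string[] = [];
  let current = await getAccount(name);
  while (current.proxy && chain.length < maxDepth) {
    chain.push(current.proxy);
    current = await getAccount(current.proxy);
  }
  return { proxyChain: chain, effectiveVotes: current.witness_votes };
}

// Example: resolveProxyChain("someuser").then(console.log);
```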

We also recently introduced the witness schedule; I think Lucas worked on this, or maybe Jacob, but it was missing the backup schedule. So Lucas is looking into how we can actually integrate that, whether it's possible, whether there is an API endpoint that would make it happen. Otherwise, we're going to delve into the back-end work to maybe introduce this functionality.

Kind of like how other witness schedule sites display it. We want to display the schedule for all the backup witnesses, in order.

@blocktrades

Yeah, just as information, I think that data is probably coming from HiveD rather than HAF. So there might have been a modified or new API call on the HiveD side for that. Bartek, do you remember, by any chance? If not, I can look it up after the meeting.

@BW

Yes, I think a witness schedule can be received from HiveD, not from Block Explorer APIs.

@blocktrades

So yeah

@mcfarhat

we're talking about the...

@BW

I can check which API it is.

@mcfarhat

Yeah, just to be clear, this is the backup witness schedule, right? Not the current witness schedule.

@blocktrades

Yes, yeah, yeah, so yeah.

@BW

Yes, but there is probably some parameter to specify which instance of witness schedule you want to get.

@mcfarhat

Okay, okay, great. Yeah, if you have any insight into this, just share it with us.

@BW

Yes, and, sorry to interrupt you, this parameter is called include_future and it's a Boolean value. So get_witness_schedule is the call name and include_future is the parameter. I will post the information here as a comment.
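
Based on the call and parameter named above, the request would look roughly like this; the database_api namespace is an assumption here, so double-check against the comment once it's posted.

```typescript
// Fetch the witness schedule (and, with include_future, the backup/future
// schedule) directly from a hived node, per the call and parameter named above.
async function getWitnessSchedule(includeFuture: boolean): Promise<unknown> {
  const res = await fetch("https://api.hive.blog", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      method: "database_api.get_witness_schedule", // assumed namespace
      params: { include_future: includeFuture },
      id: 1,
    }),
  });
  return (await res.json()).result;
}

// Example: getWitnessSchedule(true).then((schedule) => console.log(schedule));
```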

@mcfarhat

Okay, excellent. Another thing is that we had a pending task to display the last block produced by each witness on the witnesses page. I saw a comment a while ago from you, Dan, that this was maybe finished on the back end, I'm not sure. If it is available, maybe we can look into adding it now. Do you remember?

@blocktrades

Yeah, I don't, honestly. I mean, it certainly should be calculable from the back end, that's for sure. But I don't remember; I'll look into it after the meeting though.

@mcfarhat

Okay, okay, excellent. On another front, we're starting to move more onto the balance tracker as well, thanks to Michael, who's helping out, and my new resource has also been working on the back end. I also had another question about the HAF BE development version, the HAF BE API. There was a change made three weeks ago to how the witness list is fetched.

If you're using the current 0.6 version, it doesn't work, because the change is on the development branch; I think it hasn't been pushed to any public API. So is there a development API that my devs can query?

@blocktrades

Yeah, sure. api.syncad.com is one.

@BW

We have an instance internally in the company which supports this new version. But I'm not sure if the instance Dan is publishing has been upgraded to that version. It probably will be soon, because Dan is running some new tests there, so it will probably be switched soon.

@blocktrades

Yeah, but we did upgrade api.syncad.com relatively recently. I don't remember exactly how long ago it was.

@BW

Maybe it has not been upgraded yet, because that would break our official instance of the block explorer: our public instance available at explore.openhive.network is still using the old calls, and the testing version is using the new ones. So maybe that's the reason why api.syncad.com is still serving the old version.

@blocktrades

So is the explorer using api.syncad.com?

@BW

Yes, by default.

@blocktrades

I think... I thought it had been using api.hive.blog.

@BW

But we are talking about REST APIs; api.hive.blog doesn't support them.

@blocktrades

So it looks to me... no, I mean, I'm looking at explore.openhive.network right now, and it's using hafbe.openhive.network and api.hive.blog.

@Arcange

That's what I'm seeing as well. Yeah, true.

@blocktrades

It's not using api.syncad.com. I don't know if api.syncad.com is even exposed properly for these kinds of calls. I mean, I know it is for all the Swagger stuff, but I don't know if it's set up for serving data. We can work this out offline together.

@mcfarhat

Okay. Okay, great. Let me see what else I have. Okay, there were actually a couple of questions. I was chatting with Voltec recently, and he was asking if multisig support for BLS is coming anytime soon. Do you have any update on plans for this, Dan?

@blocktrades

So, yeah, I mean, I think I mentioned it somewhere in my post, but this is all part of the stuff that I want to push into the first quarter. Basically, all the key signing stuff needs to be looked at.

It needs to be looked at in more detail, so that's probably going to be first quarter, when we do the hard fork release. Any such change will happen then; it's certainly not going to be in the December release.

@mcfarhat

Yeah, yeah, okay. Fantastic. Another thing I heard in your update is that the wallet on Denser is coming close to completion. He did ask me about maybe adding support for VSC transaction data in the Denser wallet, kind of like what PeakD did. Do you think it would make sense to have this soon?

@blocktrades

Sure. I mean, I don't really know what's involved, because I haven't looked into it. This is for VSC transactions?

@mcfarhat

Yeah, I asked him to give us more details because I don't know what's involved.

@blocktrades

Yeah, I don't know either.

@mcfarhat

Yeah, I think maybe it's something like the tokens created there, or I don't know if it's anything about the smart contracts. I did inquire about further details, but yeah, I think it might make sense, once everything is done, to maybe start introducing some of these.

@blocktrades

Well, I mean, it sounds like it can be done on the side, right? So I think that can be separate here. So yeah, I think it's pretty much an independent feature. Yeah, if you want to look into that, that sounds great.

@mcfarhat

Okay, excellent, excellent. Yeah, and I guess this is really it from my side.

@blocktrades

Okay, anyone else want to go?

@Arcange

I have a quick question. I created an issue more than one year ago about emptying savings. When you have HBD in your savings and you want to empty it, you make a withdrawal, but you still have interest to claim. Later, once your savings balance is empty, you cannot claim your interest, because you need to change the balance to make that possible.

So it's quite complicated. I onboarded a few users to store HBD, a few HBD in their savings, and they all come back to me saying it's a bit annoying: I need someone to make a transfer, then I get some interest, but the interest goes into the savings and starts to generate new interest, and then when I empty it, there is still interest to claim.

So actually, it could be really easy to just say, okay, when someone withdraws everything from savings, claim the interest too and make it empty, and it's done. I've checked the code, the Hive code, and it doesn't look very complicated. I would like to know if it's worth it for me to spend time to maybe make the change and a pull request?

@blocktrades

I don't have any strong objection to it, I guess. I think I saw this issue, but I just figured the amount involved was so small that it just wasn't that significant, because presumably you're not going to leave a lot of interest there, no matter what you do.

@Arcange

Yeah, it's not a lot.

@blocktrades

I just didn't figure it was worth dealing with, but if you want to make the change, and you think it's pretty straightforward to do it cleanly, I've got no objection to you doing it. You know, it's just about making the change clean.

@Arcange

I know it's not about the amount.

@blocktrades

Yeah, whatever. When I have to weigh work, I always ask: what's it worth?

@Arcange

Yeah, so my idea was to do it, and then... what's the best way? I've never done things like that. What's the procedure? I'm not used to GitLab, and I think it's a bit different from GitHub.

@blocktrades

It's done with a merge request. So what I'd do first: you've already got an issue, you've already created it, it sounds like. So just describe your solution in the issue, if you haven't already, and then you basically check out the code into a branch, make the change, and push the branch.

@Arcange

I push a branch.

@blocktrades

And then you create a merge request.

@Arcange

Okay, so the thing is to create a separate branch.

@blocktrades

Yeah, so you'll have a branch, you push your branch, and then you create a merge request, and that'll basically be attached to your branch. And if you're still in the middle of the work, you can create a merge request marked as a draft, which means you're not finished, but you want people to be able to look at it while you're still working on it.

Once you're finished and you think it's ready to go, you can remove the draft marker, which means it's now ready for a final review, and then, you know, it'll get reviewed and merged, I think. Okay.

@Arcange

Okay, cool. I'm going to give it a try.

@blocktrades

And as part of that, you'll want to create a test too: you'll make the change, but you'll also probably want to create a test to verify the change.

@Arcange

Okay. Okay, thank you.

@blocktrades

Sure. Anyone else? Brian, did you want to make any updates, or do you mostly just have questions today?

@Brian of London (Dr)

I'm good, I'm good.

@blocktrades

Okay, sounds good. I guess it was good we decided to go ahead without Howo, because apparently he did get the time zone change wrong.

@mcfarhat

It would have been a long wait.

@Brian of London (Dr)

He's going to show up in 10 minutes. Okay, thanks guys. See you next time.

@Everyone

Thanks, take care. Bye. Thanks, bye.
