Core dev meeting #53

in #core · last year (edited)

@howo

On my end, I finished the community type stuff. I'm sorry about the performance degradation; I believe I shipped a fix, but I don't know where we are on testing that. I sent the branch yesterday afternoon. Meanwhile I'm working on the next feature, which is beneficiary settings for communities. It's more of the same, a content policy for communities, let's say, where if you don't follow certain rules of a community, your post gets muted.
PeakD has introduced this feature already, but it requires you to add PeakD as a moderator, which is not ideal. So I'm working on adding it to the base layer.
I'm happy to discuss the performance degradation and all the things later.

@blocktrades

Okay, we're running with your new version. We found an unrelated issue with the Hivemind server setup; I think Deathwing ran into this problem when he was testing, and we ran into it too when we ran the full stack. We should have a fix for that today, though. As far as your change goes, I think we're around 20 million blocks right now, so probably by tomorrow we'll know if the fix is working for the performance issue [future howo here: it did work], because the problem doesn't kick in until around the 37 millionth block or so.

Let's see what else... Bartek's out today, so I didn't really get a chance to check in with him, but I'm roughly aware of where everything's at. Essentially we're now at the testing phase. We've been testing on, I guess, about nine machines, trying various optimizations of how the stack gets set up for the API nodes. So far it's gone pretty well; we've found some nice ways to improve things as we go along, and we're still finding small things we can do to improve.

The last thing we're going to do is start testing real-world traffic instead of just benchmarks, and we're going to try a different methodology from what we've done in the past. In the past we actually redirected the traffic, and the responses went back to actual customers, users, whatever you want to call them, and of course that could be a problem if there are any bugs or performance problems. So this time we're going to do things a little differently: we're going to send the traffic to both our normal node and the new production node. We'll be testing the production node's performance, but its results won't actually be fed back to users, so there's no chance of disrupting anybody's use of services on Hive. The new methodology takes a little more time to make sure we've got it all set up right, but I think we'll have everything ready to go today, if not tomorrow. To my mind that's probably the last big thing we need to do: the real-world tests, just to be sure there are no unexpected performance problems that really aren't possible to see until you have that kind of big mix of requests going to the servers.

Trying to think what else... there's really not much else. If everything goes smoothly in this last phase, I hope we can put out a new release candidate next week. We'll have two servers running: one that we can direct the real traffic to, and another mirrornet server, so people will be able to test on both of those. The mirrornet server will be particularly important for anyone who wants to be sure that changes haven't broken anything in their application. So we're really going to want all the guys running things like PeakD and Condenser to test on the mirrornet, so they can find out if any change is a breaking change for them. There's not a lot I expect could go wrong there, but one issue I've mentioned already in Mattermost is that we're disabling the transaction status plugin in favor of the new call.
Well, it's actually an old call, but it's just not been used as much, I think. Another thing I know has changed is that the Ecency guys added a reblog field to one of their existing API calls. Again, I don't expect that to cause any problems, but it's exactly that kind of thing, a lot of small changes, that we really want tested against applications before we make an official release. So I guess that's about it for me.
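For readers following along, here is a rough sketch of the mirroring idea: user requests are answered by the normal node, while an identical copy goes, fire-and-forget, to the node under test, whose responses are only timed and then discarded. This is purely illustrative; the real deployment presumably mirrors at the proxy layer, and the endpoints, port, and framework here are placeholder assumptions.

```python
# Illustrative mirroring proxy: the primary node answers users, the
# candidate node receives a copy of each request, and only the candidate's
# timing and status are logged. Endpoints below are placeholders.
import asyncio
import time

import aiohttp
from aiohttp import web

PRIMARY = "https://api.example.com"    # node that actually serves users
CANDIDATE = "https://new.example.com"  # node under test; replies are dropped

async def mirror(session: aiohttp.ClientSession, body: bytes) -> None:
    # Send the same request to the candidate and record timing only;
    # its response never reaches the user.
    start = time.monotonic()
    try:
        async with session.post(CANDIDATE, data=body) as resp:
            await resp.read()
        print(f"mirror: HTTP {resp.status} in {time.monotonic() - start:.3f}s")
    except aiohttp.ClientError as exc:
        print(f"mirror failed: {exc}")

async def handle(request: web.Request) -> web.Response:
    body = await request.read()
    session = request.app["session"]
    asyncio.create_task(mirror(session, body))  # fire and forget
    async with session.post(PRIMARY, data=body) as resp:
        return web.Response(body=await resp.read(),
                            content_type=resp.content_type)

async def init() -> web.Application:
    app = web.Application()
    app["session"] = aiohttp.ClientSession()

    async def close_session(app: web.Application) -> None:
        await app["session"].close()

    app.on_cleanup.append(close_session)
    app.router.add_post("/", handle)
    return app

if __name__ == "__main__":
    web.run_app(init(), port=8080)
```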

@howo

Anyone else have anything they want to share?

@mcfarhat

No, not really. Thanks for the update, blocktrades. We're looking forward to it. You said this will be by next week, so we should be able to start rolling out our testing for Actifit as well to call the...

@blocktrades

Just to clarify, though: it's a release candidate that will be out next week. But we'd like to... I think we'll be ready for everybody to start testing their apps on the mirrornet, and that's definitely what we want everybody to do as fast as possible, because that'll be the final stage before we can do a release.

@howo

I have just one topic, about making sure that performance degradations don't happen. Is there any plan for a manual pipeline? Because running this on every occasion would be far too costly. I guess whenever something gets pulled into develop, we could... I mean, even just before merging, we could have something like a manual step that runs. Right now the thing is that the pipelines don't run for more than five million blocks, right? So these long...

@blocktrades

Sometimes there's a performance problem like this one that somehow doesn't show up until 37 million blocks. So it's always been our plan to set things up so you can do a long run, you know, maybe all the way to head block, something like that. And it can be a manual step, so it's not automated; it can just be a final check you do if you want to be sure there are no problems. It's just not there yet. It's obviously going to create a lot more data and take a lot longer to run, so it's going to tie up servers. We're adding more build servers to sort of address that: we've got three right now, where we used to have two, and I think we're going to add a fourth before long. So that, plus improvements in the speed of CI and resources and things like that, means we should definitely be able to offer that as an option on the build systems before too long.

But yeah, it's definitely a problem. A lot of people don't really have systems that can easily run that long and...
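As a sketch of what that manual long-run step could check once it exists (everything here is hypothetical: the replay script name, baseline file, and tolerance are placeholders, not the actual CI setup), the job might simply time a replay past the problematic block height and compare it against a stored baseline:

```python
# Hypothetical long-run regression check: replay to a target block and
# fail if it takes significantly longer than the recorded baseline.
# The replay command and file paths are placeholders.
import json
import subprocess
import sys
import time

TARGET_BLOCK = 40_000_000          # past the ~37M mark where the regression appeared
BASELINE_FILE = "replay_baseline.json"
TOLERANCE = 1.15                   # fail if >15% slower than baseline

REPLAY_CMD = ["./run_replay.sh", str(TARGET_BLOCK)]  # placeholder script

def main() -> int:
    start = time.monotonic()
    subprocess.run(REPLAY_CMD, check=True)   # blocks until the replay finishes
    elapsed = time.monotonic() - start

    with open(BASELINE_FILE) as f:
        baseline = json.load(f)["seconds"]

    print(f"replay took {elapsed:.0f}s (baseline {baseline:.0f}s)")
    if elapsed > baseline * TOLERANCE:
        print("regression: replay significantly slower than baseline")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```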

@howo

Yeah, I thought about getting a dedicated computer to run all of that, but I don't have the space to do it yet. Because, I mean, when you sent me the PG Hero query, I knew immediately, okay, that's it. But since I cannot test it myself right now, it's tricky.

@blocktrades

Yeah, so just to clarify what I was talking about: the new stack for API nodes includes PG Hero and pgAdmin running alongside the HAF database, so it's very easy to quickly spot performance problems and things like that. We've been using those tools internally for a long time, but now they'll be available to anyone who's running an API node. If they have any problem with their server, they can inspect it and either try to troubleshoot the problem themselves or at least give feedback to other devs to figure out what's going on.
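For context, PG Hero is essentially a dashboard over PostgreSQL's pg_stat_statements. A minimal sketch of the same kind of check done by hand might look like this, assuming the extension is enabled on the HAF database (the connection string is a placeholder, and the column names assume PostgreSQL 13+):

```python
# The kind of check PG Hero automates: find the queries eating the most
# execution time. Connection string below is a placeholder.
import psycopg2

conn = psycopg2.connect("dbname=haf_block_log user=haf_admin")  # placeholder DSN
with conn, conn.cursor() as cur:
    # Top 5 queries by total execution time -- the usual first stop
    # when an API node starts responding slowly.
    cur.execute("""
        SELECT mean_exec_time, calls, left(query, 80)
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 5
    """)
    for mean_ms, calls, query in cur.fetchall():
        print(f"{mean_ms:10.2f} ms avg  {calls:>8} calls  {query}")
conn.close()
```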

Might be a short meeting today.

@howo

Yeah, it looks like it. Well, I don't have any other topics apart from that and what I'm currently working on. I'm still in the design phase, so, yeah, I have a branch that's up, but I'm not done with most of it; it's mostly there for people to follow along. The issue is that pretty much everything happens in this one big query where the post gets ingested, and I need to change that as well, so it does worry me a bit about performance.

@blocktrades

So this is the query on the indexing side or on the... Okay, yeah, those are always the most tricky.

@howo

Yeah, it's like 200 lines long. I hid all of the logic away in a second function so it's more digestible for a reviewer. But in a nutshell, everything has to happen at once, at indexing time, because that's where you set the muted field. I don't know how expensive this will be performance-wise, so I'm still... I'm building the feature first, and then I'll run some tests. That's where my focus is for now.

@blocktrades

Yeah, and once you're ready, I can run a lot of long-running tests for you afterwards. Just contact me and I'll handle it. Yeah, this is where, with the reviewer... we can catch those earlier.

I'll mention something if no one else has anything. There was another topic brought up: somebody wrote up a new idea for a change in the way voting works, and one of the other devs asked me about it yesterday. I read through the article, and it seemed not unreasonable to me either. I don't know if any of you guys have had a chance to read it, but it's a suggestion to change voting so that if you vote once and then vote a second time, it doesn't decrease the power of that vote; it just decreases the long-term voting power you have. So it doesn't do that slow decay it does right now, where your first vote is the most powerful, the second one is less powerful, and so on and so on; it just draws down your long-term voting power instead. It doesn't change much, but it's more consistent with your choice, and it seemed more intuitive to me. They had some pretty good arguments for it too, I thought. And as far as I understand, it might be an easy change. I haven't looked at the code myself, but I was given that impression. So I thought it was something we could talk about.
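To make the difference concrete, here's a toy model of the two schemes (my own illustration; the numbers are made up, not the chain's actual constants): under the current rules each vote's strength scales with your remaining mana, while under the proposal every vote lands at full strength and only your long-term mana budget shrinks.

```python
# Toy model of the two voting schemes discussed above.
# Numbers are illustrative, not the actual chain constants.

FULL_MANA = 100.0          # voting mana as a percentage
DRAIN_PER_VOTE = 2.0       # a full vote drains ~2% in this toy model

def current_scheme(num_votes):
    """Vote strength is proportional to remaining mana,
    so each successive vote is slightly weaker."""
    mana = FULL_MANA
    strengths = []
    for _ in range(num_votes):
        strengths.append(mana / FULL_MANA)      # vote weight scales with mana
        mana -= mana * (DRAIN_PER_VOTE / 100)   # drain is also proportional
    return strengths

def proposed_scheme(num_votes):
    """Every vote lands at full strength; only the long-term
    mana budget is reduced, so you simply run out sooner."""
    mana = FULL_MANA
    strengths = []
    for _ in range(num_votes):
        if mana < DRAIN_PER_VOTE:
            break                               # budget exhausted, no more votes
        strengths.append(1.0)                   # constant strength
        mana -= DRAIN_PER_VOTE                  # flat drain from the budget
    return strengths

print([round(s, 3) for s in current_scheme(5)])   # [1.0, 0.98, 0.96, 0.941, 0.922]
print([round(s, 3) for s in proposed_scheme(5)])  # [1.0, 1.0, 1.0, 1.0, 1.0]
```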

@howo

Voting is a bit annoying because everything uses the manabar system, so that's going to have to change. The manabar is also shared among all of the things that regenerate, including RC, so voting will have to have its own settings. Basically, a new system will have to be coded to accommodate these changes, or we can just split it, like a voting manabar versus an RC manabar. But I don't assume it would be too complicated.

@mcfarhat

I haven't read the article myself, but would you know the main reason behind the vote value decaying after the first vote?

@blocktrades

No, I really don't. That's the thing. It was just always that way, and I've never heard a particular justification for it. In fact, the other dev was asking me what the justification was. I was like, I have no idea.

@mcfarhat

Yeah, it seems weird because, I mean, if you're giving the most value to your first vote, you're probably assuming that the first vote is the most important one. But I don't think people actually vote that way. It doesn't make sense, because you might not stumble upon the best quality content first and then want to vote for it.

@blocktrades

Almost certainly you won't, yeah. So if that's the nutshell of the change, it kind of makes sense to me. Okay, so more than the programming aspect right now, I just wanted to throw out the idea for anyone to think about, and see if they can think of any reasons why it is the way it is now, whether it made sense to them, because it didn't make sense to me.

@gtg

It's because voting back in the day was completely different. We had those multiple windows where you could vote: there was a one-day window and a seven-day window, even a 30-day window, so we might have been trying to protect ourselves from some abuse. But I don't know if that still applies to the voting we have now.

@blocktrades

Yeah, I mean, it's hard for me to think of what the abuse would be right off the top of my head, that's for sure. I'll kick it around and see if I can see anything, but nothing really comes to mind.

@eddiespino

I think the value of the second vote depends on the voting power at the moment of the vote. Before, you would lose the rewards, but now it depends on the voting power. The rewards also depend on whether the post is within its first 24 hours or not; whether you vote inside the 24 hours, or at 48 hours, or after, also matters.

@blocktrades

Yeah, there's a decay over time, and for the other one, we do in fact know why it is. You're right, it's the voting power change that makes the difference. What you said is correct; that's why it works. But the question is, should it work that way? And I haven't heard a good justification for why it works the way it does right now.

Well, I'm not expecting an answer today; I just threw it out there. It's worth reading the article, I think, because they made some pretty persuasive arguments for changing it.

New witness

A small announcement that will get its own post later: I'm starting my own solo witness. If you enjoy my work, please vote for @howo at https://peakd.com/me/witnesses


Thank you for the updates; I learned a bit more about voting mechanics.

Holy trip reading this...

Tbh, every single day I come here and read, trying to learn in the short access I have to the internet. But that's a good question about the first 24h after a post goes up.

I mean, how can all posts be voted on by community heavyweights in time? We're not talking about 100 posts, we're talking about millions. I've got a true love for StreetArt and that kind of stuff, and those communities seem to get way less support. Just saying: we all put our knowledge into something specific, and with more reward comes more effort, and vice versa.