Core dev meeting #61

in #core · 3 months ago

@howo

So this is our 61st meeting, I think.

@mcfarhat

I had no idea you were keeping count.

@blocktrades

I think you can only see some on YouTube; they have to be numbered.

@mcfarhat

Yeah, true.

@howo

It's the 61st, yes, congratulations. On my end, I'm still looking at some of the hivemind stuff, trying to merge my work. The issue is what I spoke about last time: I need to resolve the merge conflict, and the SQL has changed, so I need to dig into this. So I worked a bit on that, but then Krim came back to me and asked, hey, do you have any news on the Reseta API? Which is something that I basically stopped three-quarters of the way through, because there wasn't that much interest in it. So I figured I'd finish that before moving on to the hivemind stuff again, also because I'm a bit tired of working on the deep PL/pgSQL stuff. So I'm trying to wrap this up, then I'll finish the hivemind work, and then I'll be free-ish for new work; I don't know what I'll pick up then.

@blocktrades

Okay, cool. How does the Reseta stuff look? Do you have any idea how long it could take to finish?

@howo

I don't really have an estimate. I'm done with most of the endpoints, but there is some stuff I'm struggling with, mostly submitting raw transactions, where they want you to be able to build a transaction offline, sign it, and then submit the payload to an API.
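The flow described here, build and sign a transaction offline, then submit only the finished payload, can be sketched as follows. This is a minimal illustration assuming the standard Hive JSON-RPC broadcast method; the operation values and signature are placeholder examples, not real data.

```python
import json

def make_broadcast_request(signed_tx: dict, request_id: int = 1) -> str:
    """Wrap an already-signed transaction in a JSON-RPC 2.0 envelope.

    The transaction is assumed to have been built and signed offline;
    only this final payload ever touches the API node.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "network_broadcast_api.broadcast_transaction",
        "params": {"trx": signed_tx},
        "id": request_id,
    })

# Placeholder signed transaction: a signed transaction carries its
# signatures alongside the operations (values here are fake).
signed_tx = {
    "ref_block_num": 12345,
    "ref_block_prefix": 67890,
    "expiration": "2024-08-20T12:00:00",
    "operations": [
        {
            "type": "vote_operation",
            "value": {
                "voter": "alice",
                "author": "bob",
                "permlink": "example-post",
                "weight": 10000,
            },
        }
    ],
    "signatures": ["1f6e..."],
}
```

The resulting string is what would be POSTed to an API node; the signing itself (not shown) is the part that can happen entirely offline.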

@blocktrades

Okay, so you're talking about hive transactions?

@howo

Yeah.

@blocktrades

Okay, you might want to look at the wax stuff that they've got.

@howo

Yeah, for sure. The issue is I'm not very familiar with it, so I'd need to do some extra learning on my end. So there is a bit of that, and there is also all of the actual load testing, aka spinning up a real full node and making the API work in real-life conditions, because I've only tried it on a testnet. So I don't know how it's going to go performance-wise. And for that I need to spin up a machine; I don't have a machine, so I need to buy one. Anyways.

@blocktrades

Gotcha. It might be good to do something on a mirrornet too, maybe. And we've also been trying to get a mirrornet set up.

@howo

Oh yeah, how is that going?

@blocktrades

I think you ran into some problems with the power outage.

@gtg

Yeah, I'm in the middle of starting it back up again. And you can imagine how painful it is when it comes to syncing hivemind from the start.

@blocktrades

For sure. Been there.

@howo

Yeah, that's what I'm working on.

@blocktrades

Okay. I'll actually probably go last, unless anybody else wants to go first, just so I'll have more time, because I don't know how long I'll want to talk. Anybody else want to go next?

@mcfarhat

I can give some updates, at least on the work we're doing on the Hive block explorer. Over the past couple of months, we've been doing a lot of work helping out with the whole UI. We've added a lot of components to the account page: there's the summary wallet, there's delegations, vesting delegations, RC delegations. We've been working on some formatting, improvements to the overall layout, the witnesses page. We're collaborating with Dan and the team about different things that might need to be improved with the backend, with indexing, with sorting. So there's a lot of work being done there. We're trying to expand and maybe help out in other areas as much as we can. I actually had three resources available earlier, but now I have one, so I'm trying to see if I can add more. And I'm doing some development myself as well; I manage the guys and I do some development whenever I can. Yeah, we're just trying to give some help on this end. So generally that's it about the help on that side. On Actifit, we're doing a lot of updates. We just pushed the iOS version update last week. We pushed an update to the web version last week as well, but we haven't announced it yet. We have an upcoming Android version, hopefully by end of month, with more updates. Yeah, that was just quickly my update.

@blocktrades

Sounds great. And a nice Actifit update there at the end.

@mcfarhat

Dan, I think there are a couple of MRs pending. Maybe they just need some modifications now; I don't know if they can be merged directly.

@blocktrades

Yeah, you're talking about the ones you guys have done, right?

@mcfarhat

Yeah, true.

@blocktrades

Yeah, I left a couple of them open because they were more complicated and I wanted Jakob to look at those. I merged in some of them; in fact, I merged one in yesterday, I think. The ones I left open, I was going to ask him about, and he was on vacation. He just got back.

@mcfarhat

It's vacation time, it's August, you would expect it.

@blocktrades

Yeah, a lot of vacation right now.

@mcfarhat

I'll try to add a few more issues, maybe enhancements, to the block explorer. Hopefully in the coming couple of days we will see if we can add some more features or find some issues that can be resolved, because the pipeline is getting narrow.

@blocktrades

Yeah, if you've got any more issues to create, that's also great. I need to keep another guy busy working on that too.

@mcfarhat

Yeah, Lucas. All right, awesome. And my guy as well; I told him I'll send you something tomorrow.

@blocktrades

Okay. Sounds good.

@mcfarhat

All right.

@blocktrades

Maybe Sagar wants to talk about what he's working on right now.

@gtg

Okay, I guess he was early.

@blocktrades

Yeah, he's kind of a new guy here, so.

@sagar

Okay, hello. Thank you for allowing me to speak and letting me go first. My name is Sagar, and in our language, Sagar means ocean. I'm from India, and this is the first time I'm joining this core dev meeting. I'm working on a few projects; two of them are Distriator and 3Speak. Let's talk about 3Speak first. I'm working with Leo Finance to integrate 3Speak so that they can have the feature to publish short content. They're almost done with the upload and video-publishing part, and now they're moving to the next stage, which is showing the short content within the video platform. I'm going to communicate with them and help them understand how to implement that; it's the same functionality which we already have in the 3Speak application. So if you haven't downloaded the 3Speak mobile application, I recommend you download it. Within it we have two more mini applications: one of them is a podcast application, and the other one is 3Shorts, and InLeo is also trying to achieve the same functionality, which is the 3Shorts feature we have in the 3Speak application. So I'm working on that, and of course, while we do these integrations, we tend to find documentation gaps, so I keep updating the documentation. And I provide daily updates on the Hive blockchain.
I put out a post daily; there are two accounts I'm using to publish the posts, which I will talk about soon. Other than this, there are small bug fixes and improvements which I keep doing on 3Speak. For example, right now I'm required to submit another build to the Google Play Store for the 3Speak mobile application; if I don't do that before the 31st of this month, they may take some strict action, I don't know. So I need to work on it and upload a build with the updated target API level, or something like that. Now, Distriator: I think they did an event yesterday or the day before, and it was a successful event. Approximately 40-50 members claimed the discount after paying in HBD and Lightning sats. After the event, people suggested a couple of improvements, and I'm working on those as well. Apart from that, there are ongoing deployments which I keep doing on Distriator, because so far business management is done using another website called Waivio, and we have a website called Spend, which is built on top of the Waivio platform. Right now businesses are added on that, and after that we can add them on Distriator. So what I'm working on is that business management should be done on Distriator directly: business guides or administrators should be able to add verified businesses on the Distriator platform directly. And it should go the other way around too; once we add a business on Distriator, it should reflect on Waivio and Spend as well.
So that's what we're working on in Distriator. On top of that, the other requirement is that whenever I'm trying to add a business, I should be able to find all the place details, like when you zoom into a particular place, say a restaurant, on Google Maps: you can see the photographs, all the details, website, phone number, working hours and everything. So I've integrated the Google Places API and its photos API, and with that we're trying to make a smoother, better experience for business guides adding a business. So far we have onboarded 300 businesses across the world. If you're interested in seeing how many businesses there are, all you have to do is go to distriator.com, and once you land there, on the left side you will see Businesses, with the businesses which have been added so far. And if you're interested in knowing which country or city, on the top right corner, next to the search button, you'll find a filter with all the countries. So these are the businesses we have added so far. What's next on Distriator is we're planning to add more and more businesses. On top of that, we're also trying to support online purchases, for businesses that support online shopping. Let's say you get a delivery after a day, or after a week; what would happen then? So far the limit to claim is around 30 minutes or up to two hours. So we're working on Distriator supporting online businesses as well. Those are the two projects on which I'm actually working; these are the projects coming under the 3Speak umbrella. On top of this, I have my own ideas, my own applications, which I can say are self-funded, and which I would like to build for the Hive ecosystem. I have this application called Inbox, or say Hive Inbox, and it's heavily inspired by the Engage application.
So that's one, and there are more applications like this which I would like to build, like Hive Polls, Hive Work, where you can vote for a proposal or a witness, donate, and chat. These are the ideas I have in mind to build for the Hive platform and put on the App Store, because if you go on the App Store or Play Store, so far we have just two or three apps; you can count them on your fingers. Beyond those, we don't have any other apps on the App Store or Play Store. If we want to bring more users from Web2 platforms to Web3, which is Hive, we need to have a presence on Web2 platforms as well. And that's what my goal is: we provide a number of applications on the App Store, so that we can showcase a portfolio of apps, like these are the apps that we are offering. And I know that we do have a list of apps on the Hive website, but having apps on the App Store and Play Store also makes an impact. Of course, not for Web3 users, but it does make an impact on Web2 users. And that's why I'm building these applications. But if you guys think that instead of these ideas, which I'm working on on my own, there are better ideas, feel free to share them and I can work on those. Yeah, so that sums up my update.

@blocktrades

Okay, sounds good. I don't know if anybody has any immediate questions. I have one question; I didn't really understand when you mentioned short content. Is that short content like tweets, or is it more like short-form videos? Okay, okay, that's what I assumed it might be, probably videos.

@sagar

Yes. It is videos.

@blocktrades

Okay. Anybody else want to talk about anything you've been working on lately, or...

@Arcange

You won't hear about anything I've been working on because I'm still on vacation. But I just have one question: you recently released the preview of the REST API.

@blocktrades

Yes. Yeah, I was going to talk about that.

@Arcange

You will talk about it.

@blocktrades

Yeah, I was planning on it. If you had specific questions you can ask.

@Arcange

Yeah, my question was: is there any work being done in coordination with what good karma was doing about revamping the whole developer portal? I haven't had any news about his progress on that work, but I would like to know if there was some coordination.

@blocktrades

Not exactly. There are kind of two forms of documentation, I guess you could say. The Swagger docs are, I think, very useful; honestly, I think they're the most important form of docs, in my opinion, because they give you a nice, easy way to test an API call very quickly. You can just go to the web page, add some parameters, and see the results, and each call has a short description. I really like the Swagger style of documentation a lot. But there is this other form of documentation which is more conceptual in nature, which gives you an overview of the whole Hive programming environment and software development environment, and should talk more about how you'd go about developing an API, how you'd go about using an API, and so on. There are different pieces, I think. The website we have now, the Hive developer website, kind of does both: it describes the API calls, and it also has more high-level stuff that gives you the general theory of Hive programming. So I see the Swagger part as sort of replacing the existing function-by-function description, but obviously we still need that high-level conceptual description, and that's where we haven't done a lot; we've been mainly focused on the low-level documentation for all the new API calls. I haven't directly talked to good karma much, but they're kind of two different efforts too, because he's documenting the existing API, whereas our Swagger docs are all documenting the new REST API. So that probably answers your question immediately, and I'll just go over what we're doing a little bit more.
So, I was kind of wondering what to talk about today, because it's been a while since we met and a lot of stuff has been going on, and I don't really think I have time to go through a lot of it. I'm going to write a post, probably in a week or two, that describes in more detail some of the stuff we're doing. But I did want to talk a little bit about the thing that Arcange just mentioned, which is the REST API stuff, because that's quite new, and part of the new direction we're going in. For me, this all started when Hive first got launched. We were all talking about redoing the API, because the existing legacy API that we had is, I mean, it's powerful, but it's a bit of a mess; it's been created over time. So we talked about refactoring a lot of things, organizing it more, and making it more efficient. For at least the past two years, I'd say, that's where I've been focusing our efforts, but trying to do it in an incremental way. The first thing I wanted to do was get HAF working, because HAF is basically a method of creating new APIs; it makes it easier for anybody to create a new API for Hive. Before, we had hivemind, but other than that, if you wanted to create APIs, there was no standardized way to do it. You could go into hived itself and modify hived, but that was problematic, because every time you add a new API call to hived, that increases the load on our network nodes, and we really don't want to do that. So everybody kind of got the idea that what we really need to do is get the data out into a database and work with it there. But I didn't want everybody creating their own different database structures that were incompatible and couldn't work together.
So, the whole point of HAF was to create a common database infrastructure that could then be used by API nodes to quickly add new APIs to their stacks. I think we've accomplished that pretty well now, and we've got a good flow. The last release we did, with the HAF API node stuff, really solidified an easy way to add new API sets to an API node server itself. So far we've added a few new APIs, like Balance Tracker and the HAF Block Explorer and things like that. The next phase is that I wanted to move away from the JSON-RPC style calls we've had for so long to a more natural REST-based interface. We've been working on that for the past several months at least, and there are several parts to this. The first thing is we needed to add these APIs to our HAF applications. So we've added a REST-based interface to Balance Tracker, the HAF Block Explorer, and Reputation Tracker. Just as an aside, Reputation Tracker is a new app we've created that basically takes the reputation calculation that was in hivemind and brings it into a separate application. And one more I'm trying to remember: HAfAH itself, I think I missed that one. So even HAfAH now has a REST-based API, and when we did that, we also moved some of the more useful functions that existed inside the HAF Block Explorer API server into HAfAH. The reason we did that was those are all API calls that don't require any extra indexing. One of the differences between HAfAH and all the other HAF apps is that there's no indexing required to get it ready: once HAF itself is indexed, HAfAH is ready. So that's a nice feature about it; it's very low overhead in that sense.
So you can get it going pretty quickly; even a minimal HAF API node can have HAfAH already, out of the box. So as much as possible, whenever there are functions that can be written that don't require any additional indexes or tables, I've decided I'm going to try to move them towards the HAfAH interface, just so those things are available immediately, without replays of other apps. We did move several, I think, very natural calls from the HAF Block Explorer interface directly into HAfAH, and if you've had a chance to look at the new REST API interface, you'll see there are several new such API calls available. I think most people here have probably already seen the new interface, but I'm going to put the link in here just in case anyone hasn't. And hopefully I sent the right link, because I just typed it from memory. And yep, it's there. So if you want to take a look at that, if you haven't already: there's a dropdown in the top right corner that allows you to pick the different APIs, and once you pick an API, you can read through the documentation for it and try out the function calls. Now, in combination with this, the other thing we need to do along with adding the new REST APIs themselves is a library to support them, which is of course nice and useful. I mean, you can make the direct REST API calls yourself, but having a library that smooths things out is obviously convenient. So the other thing we've been doing lately is updating the Wax library to support the REST calls directly. I think we'll probably be finished with that in the next week. In passing, I'll also mention we're still cleaning up Wax itself prior to its official release. I'm hopeful we're going to be able to release it by the beginning of September.
We're moving really fast there; it's been working for a while and we're using it in a bunch of applications, but there were several places where I wanted the interfaces to feel more natural and intuitive. Some of it's me: besides the REST work, it's also just me pushing some of the guys to make it, I think, a more intuitive interface. And I think that's going to be done quite soon too. So then we've got a few weeks, I think, to do the documentation; that's why I say it'll probably be the beginning of September for an official release. Maybe I'm being optimistic, but I don't think so. Along with that, in order to test Wax in the meantime, we also have several applications using the new Wax version of the libraries, in particular the REST features that were being added, so we're testing it with actual code, if you will. We're mainly testing it right now with the Block Explorer UI. We've got a branch of the Block Explorer UI that's replacing the Wax-based JSON-RPC calls with the equivalent REST ones. And of course, since we changed those APIs some, the REST APIs return slightly different data than the old JSON-RPC calls we had for the HAF Block Explorer did, so we're having to change the Block Explorer a little bit to deal with the new data coming back. But again, I think these were all improvements: the interfaces are much more logical and a little more efficient too. So that's kind of the most important thing going on right now: the movement to REST, the documentation for REST, and the flow we've created for documenting future APIs using Swagger. As part of that, we actually created a script that will scan your SQL files; you put OpenAPI comments into the code, and that's what is used to generate the Swagger files.
It also actually generates the interfaces to the calls themselves, the SQL code itself. So basically this guarantees that the documented interfaces will match the actual code interfaces, so there's no worry about those getting out of sync. Let's see what else. Before talking about anything else we've been doing, I'll just open up the floor if anybody has any questions about the whole movement towards REST, the Swagger documentation, or Wax itself and what we're doing there. I don't know if there are any, maybe there are not, but I figured I'd give a chance for that quickly.
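The scan-SQL-for-OpenAPI-comments approach blocktrades describes can be sketched like this. The comment marker format and the sample SQL below are invented for illustration; the real HAF toolchain defines its own conventions.

```python
import re

# Hypothetical marker format: an OpenAPI YAML fragment inside a SQL
# block comment that starts with the word "openapi".
OPENAPI_BLOCK = re.compile(r"/\*\s*openapi(.*?)\*/", re.DOTALL)

def extract_openapi_comments(sql_source: str) -> list[str]:
    """Pull OpenAPI annotation blocks out of SQL source.

    Keeping each endpoint's docs next to the SQL function that implements
    it is what lets a generator guarantee docs and code cannot drift apart.
    """
    return [m.strip() for m in OPENAPI_BLOCK.findall(sql_source)]

# Illustrative SQL file with one annotated endpoint (names are made up).
sql = """
/* openapi
  /blocks/{block-num}:
    get:
      summary: Returns a block
*/
CREATE FUNCTION endpoints.get_block(_block_num INT) RETURNS JSONB AS $$ SELECT '{}'::JSONB $$ LANGUAGE SQL;
"""
```

A real generator would then merge these fragments into one Swagger/OpenAPI file and also emit the routing glue for the documented calls.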

@Arcange

I just have one question. All those REST APIs, are they all served by one web server, or does each component have its own web server?

@blocktrades

That's right. So basically these are behind a normal HAF API node server, like the one everybody's running now. What we've done is there's a separate PostgREST server for each set of APIs: there's a PostgREST server for the block explorer, there's a PostgREST server for the balance tracker, etc. And there's one Swagger doc for each one. Another thing we did, which I guess I didn't talk about, was that in order to implement some of the fancier REST-type calls that weren't really supported directly by PostgREST, we added the ability to generate rewrite rules for your web server; right now it's for nginx specifically. So the calls first go through nginx, and it rewrites the ones that need it into a standardized format that PostgREST can understand at the end. That allowed us to implement calls where, for instance, you're putting parameters into the path instead of putting them in as query strings. As a simple example, you could do something like block/ followed by a block number, and that's your whole URL and your whole query, without having to include a query string to specify which block number you're interested in. It looks like Gandalf is showing some of this. So yeah, you can see there's an example: if you scroll down a little bit, you can see that second one there says block/{block-num}. That's an example of one of those types I was just mentioning, where the rewrite rules come into play. I think you're only able to get... oh, actually, yeah, you're right, this is a full HAF node, so you can get any of the blocks.
The one I was testing with for a while had 5 million blocks, but then we moved to the full production API server running here, which has live blocks. So, any other questions about why we're doing this or what the benefits are?
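The nginx rewrite idea described above can be sketched in miniature. The path prefix and the target function name below are invented stand-ins, not the real HAF API node rules; the point is only the shape of the transformation from path parameters to a PostgREST-friendly query string.

```python
import re

def rewrite_path(url_path: str) -> str:
    """Turn a path-parameter style REST URL into the query-string form
    that PostgREST understands natively, the way an nginx rewrite
    rule would before proxying the request onward.

    Both the /hafah-api prefix and /rpc/get_block are hypothetical names.
    """
    m = re.fullmatch(r"/hafah-api/blocks/(\d+)", url_path)
    if m:
        return f"/rpc/get_block?block-num={m.group(1)}"
    return url_path  # unrecognized paths pass through unchanged
```

In production this logic lives in nginx `rewrite` directives generated alongside the Swagger files, so the app author writes the annotation once and gets both the docs and the routing.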

@Arcange

Yes, I have another question. I saw in the API preview that there is one API to retrieve block operations. Before, if I remember, I think I opened an issue about this, there were six different ways to retrieve blocks, sometimes with virtual operations included, sometimes not, and the structures differed. So I guess this new API would replace the previous ones. And will it be as performant as the direct ones?

@blocktrades

Yes, it is. This is just as performant, if not more performant, than anything we've had before. And yeah, it basically does everything now. You can filter based on virtual versus real operations, or, if you want to filter on particular operations you want to see, you can do that as well.
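Building a filtered block-operations request like the one discussed here could look as follows. The path layout and every parameter name in this sketch are hypothetical; the real interface is whatever the Swagger UI on an actual HAF API node documents.

```python
from urllib.parse import urlencode

def build_block_ops_url(base: str, block_num: int,
                        op_types=None, virtual_only=False) -> str:
    """Assemble a REST URL for fetching one block's operations.

    Parameter names ("operation-types", "operation-group-type") and the
    "/hafah-api" prefix are illustrative placeholders, not the real API.
    """
    url = f"{base}/hafah-api/blocks/{block_num}/operations"
    params = {}
    if op_types:
        # e.g. restrict the result to specific operation types
        params["operation-types"] = ",".join(op_types)
    if virtual_only:
        # e.g. ask only for virtual operations
        params["operation-group-type"] = "virtual"
    return url + ("?" + urlencode(params) if params else "")
```

The appeal over the old six-ways-to-get-a-block situation is that one resource URL plus optional filters covers every variant of the query.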

@Arcange

Okay.

@blocktrades

Yeah, that was a good question. Like I said, one of the drivers for all this was the refactoring of the API into something more rational. And we're still open, of course, to comments: if anybody sees improvements they want to make before we do the official release, it's better to do it before release than after. I'd appreciate anybody taking the time to go over and look at it; even if you only see documentation changes to make, that's also quite useful. I've reviewed it somewhat, but I still feel there are probably improvements that can be made in the docs. So anything you find confusing in the Swagger documentation especially, let me know. Offline, if you're not here, just file an issue on GitLab or mention it on Mattermost. Does anyone else have any questions about this stuff before I move on to other things we're working on? Okay, I'm going to take that as no questions. We're also working on a bunch of other apps, of course. One of the things I've been involved with lately is Clive. Clive is basically a command-line, curses-style wallet; a very old-school wallet. The idea behind it is to replace the old CLI wallet that we've had forever, and also to have a nice wallet that can be used for things like offline signing, which we've had no really good, friendly solution for in a long time. It's also going to have a lot more features; somebody recently was asking for the ability for a user to easily create transactions with several operations, and Clive can do that. My involvement is mainly coming in as a UI consultant, to complain a lot about things I don't like.
And sort of get some of the UI changed; progress is pretty good there, so I'm hopeful that maybe by HiveFest we'll have something reasonable to show. We've been releasing alpha releases for anybody who wants to help us out and test, but I hope to push people harder to do some testing once the next wave of changes comes out, probably in the next two weeks, I guess. Let's see what else. You guys probably also know that we're working on something called Denser, which is a Condenser replacement. I think everybody here knows, but just in case anyone listening doesn't: Condenser is basically the code that drives sites like hive.blog; that's the actual code name of the code base. We're rewriting that whole code base as a new version with more modern technology that's easier to maintain, built using Wax, so it's a good place for us to test the Wax interface too. As part of that, we're also building a lot of infrastructure that will be useful to other applications. We built Beekeeper, which is a technology for securely storing private keys. We built HB Auth, which is a separate sign-in library for people to log in and things like that. And there are various other little apps we've been creating along the way that I think will be useful to multiple applications; I'll probably talk about them more in my post. Other than that, on hivemind: we did make a change to HAF to simplify creating HAF apps and reduce the chance that somebody makes an error when they're designing a HAF app. We redesigned the main loop that apps use to process data from the blockchain and put it into their tables. It just avoids issues that could happen when people are doing commits and get interrupted; the app will be interrupted in a more stable state.
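The safety property behind that main-loop redesign, that an interruption can never leave app tables out of step with the app's recorded progress, can be illustrated generically. This sketch uses SQLite purely as a stand-in for HAF's PostgreSQL tables; the schema, loop shape, and table names are all invented.

```python
import sqlite3

def process_blocks_atomically(conn, blocks):
    """Generic sketch of the idea behind a HAF-style app loop: write a
    batch of derived data AND the progress marker in the SAME database
    transaction, so a crash mid-batch rolls everything back together."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS ops (block_num INTEGER, body TEXT)")
    cur.execute("CREATE TABLE IF NOT EXISTS app_state (last_block INTEGER)")
    cur.execute(
        "INSERT INTO app_state SELECT 0 WHERE NOT EXISTS (SELECT 1 FROM app_state)"
    )
    try:
        for num, body in blocks:
            cur.execute("INSERT INTO ops VALUES (?, ?)", (num, body))
        # progress marker lands in the same commit as the data
        cur.execute("UPDATE app_state SET last_block = ?", (blocks[-1][0],))
        conn.commit()
    except Exception:
        conn.rollback()  # interruption leaves the prior consistent state
        raise
```

An app written this way can always restart from `last_block` with no repair step, which is the "interrupted in a more stable state" property described above.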
Along with that, we've done that in HAF and we've been updating all the apps to use the new loop, including hivemind. And in hivemind we've also been doing a bunch of refactoring of the code there, in order to improve both the code's maintainability and its performance. I haven't checked in on the progress lately, I can see the work going on, but I don't know exactly how far along we are. I'm hopeful we can get something ready for release by HiveFest for that too, but we'll see. Let's see what else. I mentioned a bunch of other things in passing, like Reputation Tracker. Let's see, anything else... I don't know, Gandalf, can you think of anything? Because I'm running out of list. Oh, I remembered something, sorry: light accounts. The other thing we're still working on is light accounts. We've got a prototype application now; the light accounts app is essentially another HAF app. What it does is scan the blockchain for changes people make to their keys. Initially, the first thing it's doing right now is processing all the operations that people use to change their keys, so it creates a table of the state of what people's current keys are. So HAF will have that information. But it's meant to manage more than just existing Hive keys; the idea is it creates a way to make second-layer keys. Somebody can basically register keys at the second layer, and once the keys are registered in the light accounts app, using new custom JSON operations, then any app can know that those keys are tied to that account. These keys can then be used across multiple second-layer apps: any app that bothers to subscribe to the light accounts interface will know whose keys those are, for instance.
They can also register the type of key, so they can say this key is only usable for these types of operations, etc. The lite accounts work is in progress; right now, like I said, it's processing primarily the built-in hive operations, and next we'll be adding support for the custom_json operations. The lite accounts project is pretty important, not only because it allows us to do this key registration, but because we're also building a lot of the basic technology for handling what I would call second-layer transactions. At the first layer, you can stick a bunch of operations into a hive transaction, and the operations only succeed if all of them succeed; otherwise the transaction fails. We needed a similar mechanism at the second layer, so as part of the lite accounts work we're also building the technology for processing second-layer transactions.

Other work going on: we're doing some final cleanup on the support for light nodes, so I hope we'll be able to release a version with light node support soon. Just for the terminology I'm using, light nodes basically means nodes that don't need to store all the blocks. For instance, they can keep the last million blocks, or no blocks at all. I think this will primarily be useful for servers where people are space-constrained and don't want to store 500 gigabytes of data just to have a hive node running for their server. Obviously, witnesses should tend to keep at least their main node with a full block log and deal with the 500 gigs, but that's life.

Let's see. There's certainly more, but like I said, it's just too much for me to keep in my brain at one time, so unless Gandalf can think of anything else I should mention, I'll probably leave it at that.
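As a rough sketch of the registration flow described above, here is what a second-layer key registration via custom_json might look like, along with the account-to-keys table a consuming app could build from it. The operation id, payload fields, and key-type labels are all hypothetical; the real lite accounts app defines its own schema.

```python
import json

def make_registration_op(account, key, key_type):
    """Build a custom_json operation registering `key` for `account`,
    restricted to a declared key type (e.g. posting-like actions only).
    All field names here are illustrative, not the app's real schema."""
    return ["custom_json", {
        "required_auths": [],
        "required_posting_auths": [account],
        "id": "lite_accounts",                 # hypothetical app id
        "json": json.dumps({
            "action": "register_key",
            "key": key,
            "key_type": key_type,              # e.g. "posting"
        }),
    }]

class KeyRegistry:
    """What a subscribing second-layer app might maintain:
    account -> {registered key: declared key type}."""
    def __init__(self):
        self.keys = {}

    def apply(self, op):
        kind, body = op
        if kind != "custom_json" or body["id"] != "lite_accounts":
            return  # ignore operations that aren't ours
        payload = json.loads(body["json"])
        if payload["action"] == "register_key":
            account = body["required_posting_auths"][0]
            self.keys.setdefault(account, {})[payload["key"]] = payload["key_type"]
```

Any app replaying the chain through such a registry ends up with the same view of which keys belong to which account, which is what lets the registration be shared across second-layer apps.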

@gtg

No, I don't think so. I mean, there's a lot going on, but I guess more details belong in a post.

@blocktrades

Yeah, exactly. Details are better put into a post. Anybody have any questions about anything we're doing right now?

@Arcange

One question regarding light nodes. Will they be able to broadcast transactions, or are they just for reading information?

@blocktrades

Oh yeah, they can. You mean, can they be API nodes, is that what you're asking?

@Arcange

I would not say full API nodes, but just using them to broadcast transactions.

@gtg

Yes, that's one of the...

@blocktrades

Yeah, they can. I mean, they can be used as full API nodes. So you can literally run your API node with no block log. The only thing that would be a problem is looking up block data, but that's actually going away, because with HAF that's not a problem anymore either: nowadays we have HAfAH, where the block data is stored in the HAF database rather than in the node itself. So these can be essentially full API nodes, and I expect the API nodes will probably move to this model of running light nodes before long.
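To make the "no block log" point concrete: clients request block data through JSON-RPC calls such as `block_api.get_block` (a real Hive API method), and a light API node can answer those from the HAF database instead of a local block log. The sketch below only builds such a request; it makes no network call, and the request shape is the standard JSON-RPC 2.0 envelope.

```python
import json

def get_block_request(block_num, request_id=1):
    """Build the JSON-RPC payload a client would POST to a Hive API node
    to fetch one block. The serving node does not need a local block log
    if the block data lives in its HAF database."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "block_api.get_block",
        "params": {"block_num": block_num},
        "id": request_id,
    })
```

From the client's point of view nothing changes; where the node sources the answer is an internal detail.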

@gtg

Yes, I think we should start expecting witnesses to provide seed nodes that can actually provide blocks. I mean, nodes that really are seed nodes, because everything else can run as light nodes without a block log.

@blocktrades

Right. Exactly. But we will obviously need somebody who steps up and stores the data, so witnesses seem like a good choice for that.

@gtg

Unless you want me to have the only copy. I will happily step up.

@blocktrades

Then you can blackmail everyone later: "Would you like a copy of the block log? How much are you willing to pay?"

@Arcange

And then I have another question regarding libraries. Currently, if we want to encrypt and decrypt memos, we have to use ideas, because I believe that's the only library that's able to do it. So even if you just want to encrypt and decrypt memos, you have to import, if not all of it, at least parts of ideas. Will there be another way to do that with the new libraries?

@blocktrades

Yeah, wax will support encryption. In fact, it supports encryption right now. I'm looking at the way the encryption has been done, and I have some changes I want to make to it, mostly at the interface level, not the actual code; the code that does the encryption is all fine. That's an active issue that will probably be done in the next week or two. So yes, it does support encryption now, just not yet the way I want to do it.
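For background on what memo encryption involves: the Hive convention (inherited from Steem) is that a memo beginning with `#` is treated as encrypted, while anything else is plaintext. The actual cipher, an ECDH shared secret between sender and recipient memo keys plus symmetric encryption, is what libraries like wax implement; the sketch below shows only the marker convention, and the helper names are mine, not wax's API.

```python
def is_encrypted_memo(memo: str) -> bool:
    """By convention, wallets attempt decryption only when a memo
    starts with '#'; everything else is displayed as plaintext."""
    return memo.startswith("#")

def mark_encrypted(ciphertext: str) -> str:
    """Prefix already-encrypted memo ciphertext so receiving wallets
    know to run it through decryption."""
    return "#" + ciphertext
```

This is why a library that handles memos end to end has to carry the key-handling and crypto code with it, which is what the question about importing the whole library is getting at.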

@Arcange

And will it need to be a separate module, just for encryption? Because...

@blocktrades

No, it's just wax. It's part of the base wax library.

@Arcange

So we'll have to import the whole wax library.

@blocktrades

Yeah, that's for sure. Somebody asked, I think Voltec was asking, if we could make a trimmed-down version of the library. We'll probably look into that at some point, but it's not a trivial thing to undertake. Part of the way wax works is that we took the core protocol C++ code, wrote interface code for it in the protobuf language, and then generated wasm code from that. The nice thing about this technique is that the algorithms used by wax will always match the ones used internally by hived itself, but it does come at the cost of a fairly large wasm library being imported. Maybe what we could look at at some point is whether we can trim that wasm library down into sections covering different parts of the features. If somebody doesn't want all the features that are there, say they're not generating transactions, then they don't need the code associated with transactions, and we could maybe make that a separate sub-library or something like that. But these are just my initial thoughts on the matter; I haven't looked into what the trim-down would take yet.

@Arcange

Okay, perfect. Looking to the future.

@blocktrades

Anyone else have any questions? Okay. @howo, do you want to take us out?

@howo

Yeah, I don't have anything else on my plate really. Thank you everyone and see you next month.


Use find and search "lightning." Voice to text can be funny like that sometimes.

Hahaha
