Thanks for the comment. It makes me think about improving the algorithm :)
I'm not sure I like the normalization though: users should be motivated to see their reputation go up, but in this case it can go down just because there are more users.
I agree that it is great (and motivating) to see the reputation score rise over time. However, I really want the two factors, engagement and authorship, to contribute equally, so one is forced to normalize somehow. Maybe a good solution would be to take the total sum of the two indicators over all users, divide it by two, and normalize each indicator to this number. I will definitely try that tonight. Note that by virtue of the exponential decay, the score can actually go down no matter what.
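In practice, I imagine something like the following (a minimal sketch with made-up numbers and hypothetical function names, not the actual code of the metric):

```python
def normalize_indicators(engagement, authorship):
    """Rescale both indicators so that each one carries half of the
    total score mass, as described above (sketch only)."""
    total = sum(engagement.values()) + sum(authorship.values())
    target = total / 2.0  # each indicator should sum to this value

    eng_sum = sum(engagement.values()) or 1.0   # avoid division by zero
    auth_sum = sum(authorship.values()) or 1.0

    scores = {}
    for user in engagement.keys() | authorship.keys():
        e = engagement.get(user, 0.0) * target / eng_sum
        a = authorship.get(user, 0.0) * target / auth_sum
        scores[user] = e + a
    return scores


# Made-up example: after rescaling, engagement and authorship
# contribute the same total amount, whatever their raw magnitudes.
print(normalize_indicators(
    engagement={"alice": 10.0, "bob": 5.0},
    authorship={"alice": 1.0, "bob": 3.0},
))
```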
To make it even more complete, you could use the UA score of the post rather than its number of votes/payout.
I do not like the UA score because it somehow measures the connection to the top witnesses, which is not appropriate for a community. Here, we have a community account (@steemstem) whose behavior is driven by the rules of the community, and this is what I wanted to use as a seed for the metric.
- For the authorship indicator, I ignore the number of votes and the payout value and focus solely on the weight of the @steemstem vote.
- For the engagement indicator, I instead use the length of the comment as a seed for the score (which is questionable, but I haven't found a better option so far). Only comments on posts supported by the community account enter the computation (see the rough sketch below).
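To give an idea, the seeds of the two indicators look roughly like this (a simplified sketch; the field names assume the usual Steem comment/post JSON, and the real code handles more cases):

```python
COMMUNITY_ACCOUNT = "steemstem"

def authorship_seed(post):
    """Seed of the authorship indicator: only the weight (percent) of the
    @steemstem vote matters; other votes and the payout are ignored."""
    for vote in post.get("active_votes", []):
        if vote["voter"] == COMMUNITY_ACCOUNT:
            return vote["percent"]
    return 0

def engagement_seed(comment, supported_permlinks):
    """Seed of the engagement indicator: the length of the comment body,
    counted only for comments on posts supported by the community."""
    if comment["parent_permlink"] not in supported_permlinks:
        return 0
    return len(comment["body"])
```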
> Note that by virtue of the exponential decay, the score can actually go down no matter what.
Yep, but in this case it's a "deserved" decrease of the reputation IMO.
Maybe the number of users could also be a factor in your calculation.
> I do not like the UA score because it somehow measures the connection to the top witnesses, which is not appropriate for a community.
I have voiced similar concerns to @scipio, since it is a very centralized way to start, given the concentration of VESTS in the hands of a few. However, the way I understand it, this was only used to initialize the algorithm, so the importance of the top witnesses should decrease over time (TBC).
But right, if you already have your own community calculation, that works!
> For the engagement indicator, I instead use the length of the comment as a seed for the score (which is questionable, but I haven't found a better option so far).
Among the many projects that I want to work on but can't possibly find the time for, I was thinking that developing a spam detector shouldn't be too hard, using machine learning and SteemPlus to let users provide samples of spammy content (something along the lines of the sketch below).
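Something like this, for instance (a hand-wavy scikit-learn sketch with made-up training samples, just to illustrate the idea):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled samples collected through SteemPlus (1 = spam).
comments = [
    "great post, follow me and upvote my blog",
    "Interesting derivation; could you detail the approximation you used?",
]
labels = [1, 0]

# Simple text classifier: TF-IDF features + logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# With a real training set, this would flag spammy comments automatically.
print(model.predict(["nice post, check out my blog"]))
```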
The ML thing would be great. For now, I only have an anti-spam filter in the form of a minimum comment length (see the snippet below). This is not optimal, but it allows getting rid of a large amount of spammy comments.
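For reference, the current filter is really just a length cut, something like this (the threshold is a placeholder, not the actual value I use):

```python
MIN_COMMENT_LENGTH = 100  # placeholder threshold

def looks_like_spam(comment_body: str) -> bool:
    """Current anti-spam check: anything shorter than the cut is dropped."""
    return len(comment_body.strip()) < MIN_COMMENT_LENGTH
```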
I am still thinking about the normalization thing. @justtryme90 is right: we don't want this to be a competition. It is just a fun metric, and that's it.
I'll keep the ML on my TODO list then ^^
A bit of competition is healthy IMO, as long as it doesn't become too important.
Fully agree here :)