No worries on Steembots..I'm still thinking of trying something just as a fun project to get me back to coding a bit, it's been a while.
I've got some ideas on the #nameinlights challenge, it's one I'd love to accomplish.
Oooo a squirrel...
Lol
As for bots, I think that @cristi and @trogdor have been releasing some really quality code lately and their stuff, while mostly analytics based, is a solid basis for a quality bot.
For anyone else following this...
@sykochica is one of my closest friends on this platform. Very strange to meet someone who makes posts, that when you read them for the first time, you think to yourself, "Funny, those are exactly my words but I don't remember writing it."
I have a bot based on neural fingerprinting models. It uses NLP and content extraction and analysis to "map the mind" based on contextual frequency analysis and something we don't have a word for right now, but amounts to word2vec applied to mental context.
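To give a rough idea of the frequency-analysis half of that, here's a toy sketch: build a normalized word-frequency vector for each author and compare vectors with cosine similarity. This is my own minimal illustration, not the actual bot's code, and it omits the word2vec-style context modeling entirely; the names `fingerprint` and `similarity` are made up for the example.

```python
# Toy stylometric fingerprinting: word-frequency vectors + cosine similarity.
# A hypothetical sketch, not the actual sockpuppet-detection bot.
from collections import Counter
import math
import re

def fingerprint(text):
    """Build a normalized word-frequency vector from a body of text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def similarity(fp_a, fp_b):
    """Cosine similarity between two frequency vectors (1.0 = identical)."""
    shared = set(fp_a) & set(fp_b)
    dot = sum(fp_a[w] * fp_b[w] for w in shared)
    norm_a = math.sqrt(sum(v * v for v in fp_a.values()))
    norm_b = math.sqrt(sum(v * v for v in fp_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

a = fingerprint("I think the way we use language is a fingerprint of the mind.")
b = fingerprint("The way we use language is, I think, a fingerprint of who we are.")
c = fingerprint("Buy cheap widgets now, best prices, free shipping today only!")
# Texts sharing phrasing habits score higher than unrelated ones.
print(similarity(a, b) > similarity(a, c))
```

A real version would weight function words and contextual n-grams rather than raw counts, but the flagging logic is the same: two accounts whose vectors sit above a similarity threshold get attributed to one author.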
You see there is a theory about language acquisition which says that our use of the language is a fingerprint that is absolutely unique to us. The way our minds are constructed during the language acquisition phase of life is supposed to be completely unique to our experiences and influences. These SHOULD be completely unique. IN THEORY!
Anyways I put theory into practice and built a bot for steemit, which is supposed to be able to detect when a person is using a sockpuppet account. I thought it was working remarkably well. I managed to catch several actual sockpuppets. I was going to release it on the world. Then it let me know it had detected @sykochica and had attributed her to me.
Well obviously that's incorrect! We have nothing in common!
Which merely means there is something fundamentally wrong with this theory. As it turns out the problem with the theory is that it doesn't weight commonality of recent experiences and their imprint on the mind properly. Particularly the parts of the mind responsible for language construction.
Despite us having literally nothing in common growing up, we had both spent a significant amount of time playing the same games, such as WoW. We were also both brought up to speak our minds, and we had both recently gone on years-long journeys of self-discovery and exploration, albeit completely different journeys and for completely different reasons.
So what the bot was really detecting is people who have undergone a commonality of stressful experiences which had remolded their minds in such a way as to be reflected in the way they use the language.
Thus I had to scrap the bot.
On the bright side, I did meet someone who thinks the same way I do about most things and made a new friend in the process! Thus one day I may change it up to produce a social similarity index to help you discover people you have nothing in common with, but who are very likely to think the same way you do.
Ha, I fixed that typo too, tyty! That's what I get for steeming while tired! Lol
Thanks for the info! And sorry I broke the bot :p
I think it'd be interesting to try to use the bot you talked about for a 'recommended follow/friend' algorithm or something. Either way it should be fun to mix the psych/linguistic theory in with a bot...couple of my favorite things :)
Fascinating story. Would like to see it written up in more detail, with theory and code explained.