You are viewing a single comment's thread from:

RE: What is machine learning?

in #machinelearning • 8 years ago

As a confessed botter, I have a lot of incentive to run efficient bots. A casual reader may think that intelligent bots can be more efficient than dumb bots, but in practice that's not the case. First, I believe that humans and bots are optimized for very different tasks. Humans are intuitive and have judgement. Their decision making is fuzzy, unreliable, and subject to whim, but it can integrate far more diverse information than a computer's. Computers can only make optimal decisions within a well-defined parameter space, but they do so much faster and more reliably than humans. Importantly, computers will execute decisions without second-guessing themselves.

An example would be that a human can judge when a dog is upset and likely to attack based on visual cues. A computer can judge when a dog is likely to attack based on quantitative factors like blood sugar levels or age. A computer will reliably fight or flee, but humans will unfortunately sometimes become paralyzed.

I have dabbled in machine learning and know that getting a machine to decide anything boils down to statistical inference, which works well with defined parameters, but becomes unreliable when parameters are not easily quantifiable or have complex interactions. This is a liability for computer decision making because the process can easily overweight irrelevant parameters and result in decisions that are obviously stupid to a human.
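To make that concrete, here is a minimal sketch on made-up data (plain least-squares inference, not any particular bot's method): with only a handful of samples and many irrelevant parameters, chance correlations alone give those parameters real weight.

```python
import numpy as np

# Illustrative only: fit decision weights to one relevant parameter and
# many irrelevant ones, with fewer samples than a careful analyst would want.
rng = np.random.default_rng(0)

n_samples, n_irrelevant = 20, 15
relevant = rng.normal(size=n_samples)                    # actually drives the outcome
irrelevant = rng.normal(size=(n_samples, n_irrelevant))  # pure noise
outcome = 2.0 * relevant + 0.5 * rng.normal(size=n_samples)

X = np.column_stack([relevant, irrelevant])
weights, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print("weight on the relevant parameter:    %+.2f" % weights[0])
print("largest weight on an irrelevant one: %+.2f" % np.abs(weights[1:]).max())
# The irrelevant parameters have a true weight of zero, yet the fit assigns
# them noticeable weight purely from chance correlations in the small sample.
# The closer the parameter count gets to the sample count, the worse it gets.
```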

For these reasons I use dumb bots that act on trivial measurements like time. I leave the complex decision making to myself and use bots for simple decisions. Mostly their value to me is speed and objectivity: if I put the ultimate decision of execution in the hands of the bot, then I don't have to worry about my own last-minute irrationality undercutting an otherwise optimized decision-making strategy.
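A dumb time-based bot in that spirit is only a few lines. This is a minimal sketch, where broadcast_vote() is a hypothetical stand-in for whatever client library actually submits the vote.

```python
import time

# A "dumb" bot: no learning, no inference, one hard-coded rule executed
# without hesitation once its trivial time condition is met.
VOTE_DELAY = 25 * 60     # seconds to wait after the post appears
POLL_INTERVAL = 30       # how often to re-check the clock

def broadcast_vote(post_id: str, weight: float) -> None:
    """Hypothetical stand-in for the real network call."""
    print(f"voting on {post_id} with weight {weight}")

def run(post_id: str, post_created: float) -> None:
    # The only "decision" is whether enough time has passed.
    while time.time() - post_created < VOTE_DELAY:
        time.sleep(POLL_INTERVAL)
    broadcast_vote(post_id, weight=1.0)   # executes without second-guessing
```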


> I have dabbled in machine learning and know that getting a machine to decide anything boils down to statistical inference, which works well with defined parameters, but becomes unreliable when parameters are not easily quantifiable or have complex interactions. This is a liability for computer decision making because the process can easily overweight irrelevant parameters and result in decisions that are obviously stupid to a human.

That's the current state of affairs. It will change as software evolves.

Btw, sometimes AI is much better when things have complex interactions, because of patterns that we do not perceive and therefore cannot anticipate. This extends to forecasting sports events and market movements by feeding in all sorts of seemingly trivial data sets, which can improve prediction accuracy.
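As an illustration of the interaction point (synthetic data and scikit-learn, nothing to do with any real sports or market feed): the outcome below depends only on whether two inputs agree in sign, a pattern no single-input rule can see, yet a model that learns interactions picks it up.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic data where the label depends only on an interaction between
# two inputs (their signs agreeing). Each input alone tells you nothing.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
# A threshold on either input by itself would score near 0.5 (chance),
# because the pattern lives entirely in the interaction.
```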

The best reason to keep writing if-then-else bots is that it would take some time to learn how to do things with AI variants.

Of course for simple tasks, AI-level sophistication may not be required at all.

Your idea is very intelligent, @machinelearning.

vote done

Hi @steemed, you may be interested in a new subset of AI I'm researching, Swarm AI. I think it's promising for machine intelligence because it goes beyond merely using statistical inference and neural nets. But what's interesting is that a Swarm AI network of humans could evolve into a network of bots that work on trivial tasks organized by a self-optimizing stigmergic algorithm, from which a general-purpose intelligence may emerge.

https://steemit.com/steemit/@miles2045/steemit-as-a-stigmergic-artificial-general-intelligence-with-human-values
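For intuition only, here is a toy sketch of the stigmergy idea (not the linked Swarm AI work, and the task names and payoff rates are made up): agents never talk to each other and only follow traces left on shared tasks, yet the colony's effort concentrates on the tasks that pay off.

```python
import random

# Toy stigmergic coordination: each agent picks a task in proportion to the
# "pheromone" on it, pheromone decays everywhere, and successful work lays
# down new pheromone.
TASKS = {"task_a": 0.9, "task_b": 0.5, "task_c": 0.1}   # hidden payoff rates
pheromone = {name: 1.0 for name in TASKS}
DECAY, REINFORCE = 0.95, 1.0

def pick_task() -> str:
    names = list(pheromone)
    return random.choices(names, weights=[pheromone[n] for n in names])[0]

for _ in range(2000):
    task = pick_task()
    for name in pheromone:              # the shared environment slowly forgets
        pheromone[name] *= DECAY
    if random.random() < TASKS[task]:   # a success leaves a trace behind
        pheromone[task] += REINFORCE

print(max(pheromone, key=pheromone.get))   # usually "task_a"
```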

What about bots? Why isn't a bot like wang blocked?