I have dabbled in machine learning and know that getting a machine to decide anything boils down to statistical inference, which works well with well-defined parameters but becomes unreliable when parameters are hard to quantify or interact in complex ways. This is a liability for computer decision making because the process can easily overweight irrelevant parameters and produce decisions that are obviously stupid to a human.
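For anyone who wants to see that failure mode concretely, here is a minimal synthetic sketch (my own toy example, nothing from this thread): a logistic regression puts most of its weight on an irrelevant feature simply because that feature happened to track the label in the training data, then its accuracy falls apart once the coincidence disappears.

```python
# Toy example (assumed data, invented names): a model overweighting an
# irrelevant feature that accidentally correlates with the label in training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_tracks_label):
    signal = rng.normal(size=n)                        # the genuinely informative feature
    y = (signal + 0.5 * rng.normal(size=n) > 0).astype(int)
    if spurious_tracks_label:
        spurious = y + 0.1 * rng.normal(size=n)        # irrelevant, but correlated by accident
    else:
        spurious = rng.normal(size=n)                  # plain noise, as it should be
    return np.column_stack([signal, spurious]), y

X_train, y_train = make_data(2000, spurious_tracks_label=True)
X_test, y_test = make_data(2000, spurious_tracks_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("weights (signal, spurious):", model.coef_[0])   # the spurious feature gets the larger weight
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy: ", model.score(X_test, y_test))  # drops once the coincidence breaks
```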
That's the current state of affairs. It will change as software evolves.
By the way, sometimes AI is actually much better precisely when things have complex interactions, because it picks up patterns we do not perceive and therefore cannot anticipate. This extends to forecasting sports events and market movements by feeding in all sorts of seemingly trivial data sets, which can improve prediction accuracy.
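As a rough illustration of that point, the sketch below (synthetic data, numbers invented for the example) compares a model that only sees one "obvious" factor against one that also gets a pile of individually weak, seemingly trivial features; the second one predicts noticeably better.

```python
# Hedged sketch: many weak signals, useless on their own, add up to better
# predictions than the single obvious feature. Purely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, n_weak = 5000, 20

obvious = rng.normal(size=n)                         # e.g. recent form, last price move
weak = rng.normal(size=(n, n_weak))                  # e.g. weather, travel distance, volume quirks
# Outcome driven partly by the obvious factor, partly by many tiny ones.
logit = 1.0 * obvious + weak @ np.full(n_weak, 0.3) + rng.normal(size=n)
y = (logit > 0).astype(int)

X_all = np.column_stack([obvious, weak])
X_tr, X_te, y_tr, y_te = train_test_split(X_all, y, test_size=0.5, random_state=0)

only_obvious = LogisticRegression().fit(X_tr[:, :1], y_tr)
everything = LogisticRegression().fit(X_tr, y_tr)
print("obvious feature only:", only_obvious.score(X_te[:, :1], y_te))
print("plus 'trivial' data: ", everything.score(X_te, y_te))   # noticeably higher
```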
The best reason to keep building if-then-else bots is that it takes time to learn how to build things with AI-based alternatives.
Of course, for simple tasks, AI-level sophistication may not be required at all.
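To make that contrast concrete, something like the snippet below is all a simple bot really needs; the field names, thresholds, and rules are made up for illustration.

```python
# Minimal sketch of an if-then-else bot decision; everything here is hypothetical.
TRUSTED_AUTHORS = {"alice", "bob"}

def should_upvote(post: dict) -> bool:
    """Hard-coded rules: transparent, predictable, and often enough for a simple task."""
    if post.get("author") in TRUSTED_AUTHORS:
        return True
    if post.get("word_count", 0) < 100:
        return False                     # skip very short posts
    if post.get("tag") in ("spam", "test"):
        return False
    return post.get("image_count", 0) > 0

print(should_upvote({"author": "carol", "word_count": 450, "tag": "ai", "image_count": 2}))
```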
Your idea is very intelligent, @machinelearning.
Vote done.