@sauravrungta - Reading about DeepMind is exciting and scary at the same time. I know the vision of 'Skynet' from the Terminator movies springing to mind is a bit far-fetched. However, with advances in AI will come the inevitable questions of who decides how an AI reacts to certain situations, and whether a self-sustaining AI would really care about the decisions 'humanity' makes.
Asimov's Three Laws of Robotics did kind of create a framework (at least in the realm of sci-fi stories). I would be interested to know how something like that is planned to be implemented. The 'learning' neural networks could also, theoretically, learn bad things from humans, couldn't they?
Thanks for sharing this interesting info. Upvoted in full.
Regards,
@vm2904
AI is the greatest hoax... we would gain immortality only to live under cyber threats 24/7.
Without establishing peace on Earth first, AI will be the end of humanity.
The vision of a Skynet-type AI may not be too far-fetched if you ask me. Some experts are already predicting that AI will surpass human intelligence by 2045, so that day may not be far off.
I wonder, then, how much 'control' humans will really have over AI.
Only time will tell.