AI Uses Titan Supercomputer to Create Deep Neural Nets in Less Than a Day

in #deeplearn · 7 years ago (edited)

Just when you thought some jobs couldn't be automated...

Two posts for you guys to read:
https://singularityhub.com/2018/01/03/ai-uses-titan-supercomputer-to-create-deep-neural-nets-in-less-than-a-day/

and
https://singularityhub.com/2017/05/31/googles-ai-building-ai-is-a-step-toward-self-improving-ai/#sm.0017ir491trmdbj10ld147sxwch5g

In summary:

The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate neural networks as good as, if not better than, any developed by a human, in less than a day.

It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical operations. Such a system, loosely modeled on the human brain, is known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard tasks for AI systems today.

Reaching the technological singularity is almost certainly going to involve AI that is able to improve itself. Google may have now taken a small step along this path by creating AI that can build AI.

Speaking at the company’s annual I/O developer conference, CEO Sundar Pichai announced a project called AutoML that can automate one of the hardest parts of designing deep learning software: choosing the right architecture for a neural network.

The Google researchers created a machine learning system that used reinforcement learning—the trial-and-error approach at the heart of many of Google’s most notable AI exploits—to figure out the best architectures to solve language and image recognition tasks.
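To make the idea concrete, here is a minimal toy sketch of reinforcement-learning-style architecture search. This is not Google's actual AutoML system: the search space, the `toy_score` stand-in for validation accuracy, and all the names are invented for illustration. A REINFORCE-style controller samples candidate architectures from a learned distribution and shifts probability toward the ones that score well.

```python
import math
import random

random.seed(0)

# Hypothetical search space: a tiny grid of (num_layers, layer_width) choices.
SPACE = [(layers, width) for layers in (1, 2, 3) for width in (16, 64)]

def toy_score(arch):
    """Stand-in for validation accuracy; a real system would train and
    evaluate a network here. This toy function peaks at (3, 64)."""
    layers, width = arch
    return 1.0 - 0.2 * abs(layers - 3) - (0.0 if width == 64 else 0.3)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def search(steps=3000, lr=0.1):
    logits = [0.0] * len(SPACE)   # controller's policy over architectures
    baseline = 0.0                # moving-average reward baseline
    for _ in range(steps):
        probs = softmax(logits)
        # Sample an architecture from the controller's distribution.
        i = random.choices(range(len(SPACE)), weights=probs)[0]
        reward = toy_score(SPACE[i])
        baseline = 0.9 * baseline + 0.1 * reward
        advantage = reward - baseline
        # REINFORCE update on the categorical policy:
        # raise the sampled choice's logit, lower the others, in
        # proportion to how much better than baseline the reward was.
        for j in range(len(SPACE)):
            grad = (1.0 - probs[j]) if j == i else -probs[j]
            logits[j] += lr * advantage * grad
    probs = softmax(logits)
    return SPACE[max(range(len(SPACE)), key=probs.__getitem__)]

best = search()
print("best architecture found:", best)
```

In the real systems discussed above, each "reward" evaluation means training a full network on a GPU cluster, which is why the search took weeks for Google and why ORNL's supercomputer-scale parallelism matters.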

Not only did the results rival or beat the performance of the best human-designed architectures, but the system made some unconventional choices that researchers had previously considered inappropriate for those kinds of tasks.