RE: LeoThread 2024-11-16 03:13

in LeoFinance, 3 months ago

1986
David Rumelhart, Geoffrey Hinton, and Ronald Williams publish the seminal paper "Learning representations by back-propagating errors," in which they describe the backpropagation algorithm. This method allows neural networks to adjust their internal weights by "back-propagating" the error from the output layer toward the input, improving the ability of multilayer networks to learn complex patterns. Backpropagation becomes a foundation of modern deep learning, sparking renewed interest in neural networks and overcoming some of the limitations highlighted in earlier AI research. The work builds on the 1969 results of Arthur Bryson and Yu-Chi Ho by applying the technique specifically to neural networks, overcoming previous limitations in training multilayer networks.
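The weight-adjustment idea described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' original formulation: a tiny two-layer sigmoid network trained on XOR, with the layer sizes, learning rate, and iteration count chosen arbitrarily for the example.

```python
import numpy as np

# Minimal backpropagation sketch: a 2-layer sigmoid network learning XOR.
# Layer sizes, learning rate, and iteration count are illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))  # input -> hidden weights
W2 = rng.normal(0, 1, (4, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1)       # hidden activations
    out = sigmoid(h @ W2)     # network output

    # Backward pass: propagate the error from the output back to the hidden layer
    delta_out = (out - y) * out * (1 - out)      # error signal at the output
    delta_h = (delta_out @ W2.T) * h * (1 - h)   # error back-propagated to hidden units

    # Gradient-descent weight updates
    W2 -= 0.5 * h.T @ delta_out
    W1 -= 0.5 * X.T @ delta_h

# With enough iterations the outputs approach the XOR targets [0, 1, 1, 0]
print(out.ravel())
```

The key step is the `delta_h` line: the output-layer error is pushed backward through the transposed weights, which is what lets the hidden-layer weights receive a training signal despite not being directly connected to the target.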

This breakthrough makes artificial neural networks viable for practical applications and opens the door to the deep learning revolution of the 2000s and 2010s.