AI, not so much. GPT-4 and other large language and multimodal models, which have taken the world by storm, are built using deep learning, a family of algorithms that loosely mimic the brain. The problem? “Deep learning systems with standard algorithms slowly lose the ability to learn,” Dr. Shibhansh Dohare at the University of Alberta recently told Nature.
The reason lies in how they’re set up and trained. Deep learning relies on networks of interconnected artificial neurons. Feeding data into the algorithms—say, reams of online resources like blogs, news articles, and YouTube and Reddit comments—changes the strength of these connections, so that the AI eventually “learns” patterns in the data and uses those patterns to churn out eloquent responses.
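To make the idea concrete, here is a minimal sketch (nothing like the scale of GPT-4, and the data and learning rate are made up for illustration) of the core training loop: feed examples through a network and nudge the connection strengths, or weights, so predictions improve. A single artificial "neuron" with one connection learns that outputs are roughly twice its inputs.

```python
def train(data, lr=0.05, epochs=100):
    """Gradient descent on a one-weight neuron: prediction = w * x."""
    w = 0.0  # initial connection strength
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            error = pred - y
            # Nudge the weight to shrink the squared error;
            # its gradient with respect to w is 2 * error * x.
            w -= lr * 2 * error * x
    return w

# Hypothetical training data where output = 2 * input
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(data)
print(round(w, 2))  # the learned connection strength approaches 2.0
```

Modern models repeat this same weight-nudging step across billions of connections and vast datasets; the "loss of plasticity" problem Dohare describes is that, over long stretches of continual training, those weights gradually stop responding to new data.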