
RE: LeoThread 2024-09-02 09:39

in LeoFinance · 8 months ago

This AI Learns Continuously From New Experiences—Without Forgetting Its Past

Algorithms like OpenAI's GPT-4 are like brains frozen in time. A new study shows how future AIs could learn continuously in response to a changing world.

Our brains are constantly learning. That new sandwich deli rocks. That gas station? Better avoid it in the future.

Memories like these physically rewire connections in the brain region that supports new learning. During sleep, the previous day’s memories are shuttled to other parts of the brain for long-term storage, freeing up brain cells for new experiences the next day. In other words, the brain can continuously soak up our everyday lives without losing access to memories of what came before.

#ai #technology #openai


AI, not so much. GPT-4 and other large language and multimodal models, which have taken the world by storm, are built using deep learning, a family of algorithms that loosely mimic the brain. The problem? “Deep learning systems with standard algorithms slowly lose the ability to learn,” Dr. Shibhansh Dohare at the University of Alberta recently told Nature.

The reason lies in how they’re set up and trained. Deep learning relies on networks of interconnected artificial neurons. Feeding data into the algorithms—say, reams of online resources like blogs, news articles, and YouTube and Reddit comments—changes the strength of the connections between those neurons, so that the AI eventually “learns” patterns in the data and uses them to churn out eloquent responses.
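As a rough illustration of what “changing the strength of these connections” means, here is a minimal sketch of a single gradient-descent update on a toy two-layer network in NumPy. The architecture, data, and learning rate are illustrative assumptions, not the setup used for GPT-4 or in the study.

```python
import numpy as np

# Toy two-layer network: the "strength of connections" lives in W1 and W2.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8)) * 0.1   # input -> hidden weights
W2 = rng.standard_normal((8, 1)) * 0.1   # hidden -> output weights

def forward(x):
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden activations
    return h, h @ W2                     # hidden layer and prediction

# One training example (illustrative data, not real training text or images).
x = rng.standard_normal((1, 4))
y = np.array([[1.0]])

# Forward pass, squared-error loss, then backpropagate gradients.
h, pred = forward(x)
err = pred - y                           # gradient of 0.5 * (pred - y)^2 w.r.t. pred
grad_W2 = h.T @ err
grad_h = err @ W2.T
grad_h[h <= 0] = 0.0                     # ReLU passes gradient only where it was active
grad_W1 = x.T @ grad_h

# Gradient descent nudges the connection strengths toward the data.
lr = 0.01
W1 -= lr * grad_W1
W2 -= lr * grad_W2
```

Repeating updates like this over a huge corpus is how the network picks up patterns; every update nudges the same fixed set of connection strengths.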

But these systems are basically brains frozen in time. Tackling a new task sometimes requires a whole new round of training and learning, which erases what came before and costs millions of dollars. For ChatGPT and other AI tools, this means they become increasingly outdated over time.

This week, Dohare and colleagues found a way to solve the problem. The key is to selectively reset some artificial neurons after a task, but without substantially changing the entire network—a bit like what happens in the brain as we sleep.
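To make the selective-reset idea concrete, here is a rough sketch of what one reset step might look like for the toy network above. The utility score (average activation scaled by outgoing-weight size) and the reset fraction are simplifying assumptions for illustration, not the exact formulation used in the paper.

```python
import numpy as np

def selective_reset(W1, W2, h_batch, reset_fraction=0.05, rng=None):
    """Reinitialize the least-useful hidden units, leaving the rest of the network intact.

    W1: input-to-hidden weights, shape (in_dim, hidden)
    W2: hidden-to-output weights, shape (hidden, out_dim)
    h_batch: hidden activations collected over recent data, shape (n_examples, hidden)
    The utility score below is a stand-in: mean activation times outgoing-weight size.
    """
    rng = rng if rng is not None else np.random.default_rng()
    utility = np.abs(h_batch).mean(axis=0) * np.abs(W2).sum(axis=1)
    n_reset = max(1, int(reset_fraction * W1.shape[1]))
    worst = np.argsort(utility)[:n_reset]            # lowest-utility hidden units

    # Fresh random incoming weights; zero outgoing weights so the reset units
    # do not disturb the network's current predictions until they re-learn.
    W1[:, worst] = rng.standard_normal((W1.shape[0], n_reset)) * 0.1
    W2[worst, :] = 0.0
    return W1, W2
```

In the study's method, a running utility estimate is kept for each unit and only a small fraction of the least-used units are reinitialized as training proceeds, so the bulk of what the network has already learned stays intact.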

When tested with a continual visual learning task—say, differentiating cats from houses or telling stop signs from school buses—deep learning algorithms equipped with selective resetting easily maintained high accuracy over 5,000 different tasks. Standard algorithms, in contrast, rapidly deteriorated, their success eventually dropping to roughly coin-toss levels.

Called continual backpropagation, the strategy is “among the first of a large and fast-growing set of methods” to deal with the continual learning problem, wrote Drs. Clare Lyle and Razvan Pascanu at Google DeepMind, who were not involved in the study.