RE: LeoThread 2024-11-12 01:30

"A lot of the knowledge we discovered for our large language model can actually be translated to smaller models," Wolf said. He explained that the firm trains them on "very specific data sets" that are "slightly simpler, with some form of adaptation that's tailored for this model."

Those adaptations include "very tiny, tiny neural nets that you put inside the small model," he said. "And you have an even smaller model that you add into it and that specializes," a process he likened to "putting a hat for a specific task that you're gonna do. I put my cooking hat on, and I'm a cook."
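What Wolf describes, tiny neural nets inserted into a frozen small model and trained for one task, matches the adapter pattern used in parameter-efficient fine-tuning. Below is a minimal PyTorch-style sketch of that idea; the class name `TinyAdapter`, the dimensions, and the freeze-the-base-model setup are illustrative assumptions, not details from the interview.

```python
# Minimal sketch of a task-specific adapter (the "hat"): a tiny bottleneck
# network added inside a frozen base model. Only the adapter's few
# parameters are trained for the specialized task. Illustrative only.
import torch
import torch.nn as nn


class TinyAdapter(nn.Module):
    """Down-project, apply a non-linearity, up-project, add back (residual)."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 16):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))


# Usage sketch: freeze the base model's weights, train only the adapter.
hidden_dim = 256
base_layer = nn.Linear(hidden_dim, hidden_dim)  # stand-in for one layer of the small model
for p in base_layer.parameters():
    p.requires_grad = False                     # base model stays fixed

adapter = TinyAdapter(hidden_dim)               # the task-specific "hat"
x = torch.randn(4, hidden_dim)
out = adapter(base_layer(x))                    # base computation plus the tiny adapter
print(out.shape)                                # torch.Size([4, 256])
```

Because the adapter holds only a few thousand parameters, you can keep one base model and swap in a different "hat" per task, which is the specialization Wolf is pointing at.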