RE: LeoThread 2024-08-20 11:40

in LeoFinance · 4 months ago

We can decompose the progress in the four years from GPT-2 to GPT-4 into three categories of scaleups:

  1. Compute: We’re using much bigger computers to train these models.

  2. Algorithmic efficiencies: There’s a continuous trend of algorithmic progress. Many of these improvements act as “compute multipliers,” and we can put them on a unified scale of growing effective compute (see the arithmetic sketch after this list).

  3. “Unhobbling” gains: By default, models learn a lot of amazing raw capabilities, but they are hobbled in all sorts of dumb ways, limiting their practical value. With simple algorithmic improvements like reinforcement learning from human feedback (RLHF), chain-of-thought (CoT) prompting, tools, and scaffolding, we can unlock significant latent capabilities (see the prompting sketch after this list).
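As a rough illustration of putting raw compute and algorithmic gains on one scale, here is a minimal Python sketch. The function name `effective_compute` and every number in it are hypothetical placeholders for the sake of the example, not figures from the passage above.

```python
import math

# Hedged sketch: "effective compute" treats algorithmic efficiency gains
# as multipliers on raw training compute, so both kinds of scaleup land
# on a single axis. All numbers below are illustrative assumptions.

def effective_compute(raw_flop: float, algorithmic_multiplier: float) -> float:
    """Raw training FLOP scaled by cumulative algorithmic efficiency gains."""
    return raw_flop * algorithmic_multiplier

# Hypothetical example: 1,000x more raw compute combined with a 10x
# algorithmic multiplier yields a ~10,000x effective-compute scaleup.
base = effective_compute(raw_flop=1.0, algorithmic_multiplier=1.0)
scaled = effective_compute(raw_flop=1_000.0, algorithmic_multiplier=10.0)

ratio = scaled / base
print(f"effective-compute scaleup: {ratio:,.0f}x "
      f"(~{math.log10(ratio):.0f} orders of magnitude)")
```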
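And a minimal sketch of one “unhobbling” technique, chain-of-thought prompting: the same model weights, invited to reason step by step, often solve problems they miss when forced to answer in one shot. The question and prompt wording below are made up for illustration; no particular model or API is assumed.

```python
# Hedged illustration of chain-of-thought "unhobbling": only the prompt
# changes, not the model, yet the step-by-step framing often unlocks
# latent capability. The question here is an arbitrary example.

QUESTION = "A train travels 120 km in 1.5 hours. What is its average speed?"

# Direct prompt: the model must produce the answer immediately.
direct_prompt = f"{QUESTION}\nAnswer:"

# Chain-of-thought prompt: the model is asked to show its reasoning
# before committing to a final answer.
cot_prompt = (
    f"{QUESTION}\n"
    "Let's think step by step, then state the final answer.\n"
    "Reasoning:"
)

# Either string would be sent to whatever completion endpoint you use;
# the comparison is between prompts, not between models.
print(direct_prompt)
print("---")
print(cot_prompt)
```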

#ai #technology