RE: LeoThread 2025-01-29 07:06


Part 5/8:

DeepSeek's primary innovation appears to be its use of a technique known as "distillation." This process lets DeepSeek mimic the outputs of existing large language models rather than training a model from scratch, which demands far more resources. One analogy likened the method to studying from practice exams instead of memorizing an entire textbook, requiring significantly less memory and computational power. This approach not only made training inexpensive but also kept operational costs well below those of OpenAI's offerings.
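The post doesn't detail DeepSeek's exact recipe, but classic knowledge distillation works roughly like the sketch below: a smaller "student" model is trained to match the softened output distribution of a larger "teacher" model. This is a minimal PyTorch illustration of standard soft-target distillation, not DeepSeek's actual method; the temperature, batch size, and vocabulary size are placeholder values.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target loss: train the student to match the teacher's distribution."""
    # Soften both output distributions with a temperature, then compare via KL divergence.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature**2

# Hypothetical usage: random logits stand in for real teacher/student model outputs.
teacher_logits = torch.randn(4, 32000)                        # frozen teacher's next-token logits
student_logits = torch.randn(4, 32000, requires_grad=True)    # trainable student's logits
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow only into the student
```

The appeal is that the teacher's full probability distribution carries far more signal per example than a single correct label, which is why a student can reach strong performance with a fraction of the training compute.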

Disruption in the AI Landscape