Impressive Benchmark Performance
The host presents benchmark results showing how these Liquid Foundation Models compare against other prominent language models such as Llama and Chinchilla. The 1.3-billion-parameter model outperforms Llama 3.2 on the MMLU-Pro benchmark, while the 40-billion-parameter Mixture-of-Experts model outperforms even the larger 57-billion-parameter model the host identifies as Chinchilla.