The performance of Hugging Face's 1-billion-parameter LLaMA model this year is "equivalent, if not better than, the performance of a 10 billion parameters model of last year," he said. "So you have a 10 times smaller model that can reach roughly similar performance."