OpenAI’s CPO Kevin Weil also confirmed this during an AMA, saying he expects the two approaches to converge at some point in the future.
“It's not either or, it's both,” he replied when asked whether OpenAI would focus on scaling LLMs with more data or on a different approach built around smaller but faster models: “better base models plus more strawberry scaling/inference time compute.”