Nvidia CEO Jensen Huang will have an opportunity on Wednesday to explain why AI will continue to demand more GPU capacity even after last year's massive build-out.
Recently, Huang has spoken about the "scaling law," an observation from OpenAI in 2020 that AI models improve as more data and computing power are used to train them.
Huang said that DeepSeek's R1 model points to a new wrinkle in the scaling law, one Nvidia calls "test-time scaling." Huang has contended that the next major path to AI improvement is applying more GPUs to the process of deploying AI, known as inference. That extra compute allows chatbots to "reason," generating many intermediate tokens as they work through a problem before settling on an answer.
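The intuition behind spending more compute at inference can be sketched with a toy simulation. Everything below is illustrative, not Nvidia's or DeepSeek's actual method: it assumes a hypothetical "model" that answers a question correctly only some of the time, and shows that sampling it more times and taking a majority vote (one simple form of test-time scaling) raises accuracy.

```python
import random
from collections import Counter

def noisy_answer(rng, correct=42, p_correct=0.6):
    # Toy stand-in for one model sample: right 60% of the time,
    # otherwise off by a small random amount. (Hypothetical numbers.)
    if rng.random() < p_correct:
        return correct
    return correct + rng.choice([-2, -1, 1, 2])

def majority_vote(rng, n_samples):
    # Test-time scaling in miniature: spend more inference compute
    # (more samples) and aggregate the answers by majority vote.
    votes = Counter(noisy_answer(rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

def accuracy(n_samples, trials=2000, seed=0):
    # Fraction of trials in which the voted answer is correct.
    rng = random.Random(seed)
    hits = sum(majority_vote(rng, n_samples) == 42 for _ in range(trials))
    return hits / trials

print(accuracy(1), accuracy(9), accuracy(25))
```

With one sample the toy model is right about 60% of the time; with 25 samples the vote is almost always right. The cost of that improvement is 25x the inference compute, which is the business case Huang is making for GPUs.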