Having an efficient method become widely known in the AI industry should only accelerate the use of existing resources and bring better models out of existing AI companies faster. It should be bullish news.
The market speculates that DeepSeek's efficient method, which produced an OpenAI-level model that some are interpreting as AGI-level and which is open source, could trigger a market-wide sell-off of the existing hardware used by AI research companies as some of them go bankrupt.
To be fair, these same companies can, should, must, and will use the efficient methods now available to build faster, better, and more efficient AI models next. Google Research has already come up with what is being dubbed "Transformers 2.0", the Titans architecture, which aims at an effectively unlimited context window by replicating how human memory works. Just imagine what the latest efficient methods will produce in the next 3 to 6 months at this rapid pace of AI research.
On the surface it looks like nothing new is happening: OpenAI, Anthropic, and others appear to be withholding their next-level model versions and instead releasing better distilled models, teaching smaller models with bigger, richer, smarter teacher models. That will result in even better models coming out of these same AI research labs.
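For readers unfamiliar with the term, distillation is the well-known trick of training a small "student" model to imitate a larger "teacher" model's outputs. Below is a minimal, hypothetical sketch of the idea in PyTorch, with toy models and random data standing in for anything real; it is not any lab's actual training recipe.

```python
# Minimal knowledge-distillation sketch (toy models, random data).
# The small "student" is trained to match the softened output
# distribution of a larger, frozen "teacher".
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10)).eval()
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's probabilities

for step in range(100):
    x = torch.randn(32, 128)            # stand-in for a real training batch
    with torch.no_grad():
        teacher_logits = teacher(x)     # the "richer, smarter" targets
    student_logits = student(x)
    # Classic distillation loss: KL divergence between softened distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point is that once a strong teacher model exists, producing smaller, cheaper models that keep most of its quality is comparatively routine, which is why these labs can keep shipping better distilled models even between headline releases.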
They will surprise the markets again and again, setting a new bar with every new release coming out of these companies and projects.