Is the next frontier in generative AI transforming transformers?
Transformer architecture powers today's most popular public and private AI models. We wonder, then: is this the architecture that will lead to better reasoning, or will something else succeed transformers? Today, baking intelligence into a model requires large volumes of data, GPU compute and rare talent, which makes these models costly to build and maintain.
AI deployment started small, making simple chatbots more intelligent. Now, startups and enterprises have figured out how to package intelligence in the form of copilots that augment human knowledge and skill. The next natural step is to package multi-step workflows, memory and personalization in the form of agents that can solve use cases across functions such as sales and engineering.
The expectation is that a simple prompt from a user will enable an agent to classify intent, break the goal down into multiple steps and complete the task, whether that involves internet searches, authenticating into multiple tools or learning from past behavior.
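To make that loop concrete, here is a minimal sketch of the intent-classify, plan, execute pattern described above. It is illustrative only: the `Agent` class, `classify_intent`, `plan_steps` and `execute_step` are hypothetical names, not any particular framework's API, and a production agent would delegate classification, planning and tool use to an LLM and real integrations rather than the placeholders shown here.

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """Toy agent that turns a single prompt into a multi-step plan.

    Everything here is a stand-in: a real agent would call an LLM for
    intent classification and planning, and real tools for execution.
    """
    memory: list[str] = field(default_factory=list)  # past results, for personalization

    def classify_intent(self, prompt: str) -> str:
        # Placeholder: a production agent would ask a model to label the intent.
        return "travel_booking" if "book" in prompt.lower() else "general_task"

    def plan_steps(self, intent: str, prompt: str) -> list[str]:
        # Placeholder decomposition; a real agent would plan with an LLM.
        if intent == "travel_booking":
            return ["search flights", "authenticate with booking tool", "confirm itinerary"]
        return [f"research: {prompt}", "summarize findings"]

    def execute_step(self, step: str) -> str:
        # Placeholder execution; real steps would hit search APIs or authenticated tools.
        result = f"completed '{step}'"
        self.memory.append(result)  # remember past behavior for future requests
        return result

    def run(self, prompt: str) -> list[str]:
        intent = self.classify_intent(prompt)
        return [self.execute_step(step) for step in self.plan_steps(intent, prompt)]


if __name__ == "__main__":
    agent = Agent()
    for outcome in agent.run("Book a trip to Hawaii next month"):
        print(outcome)
```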
These agents, when applied to consumer use cases, start giving us a sense of a future where everyone has a personal, Jarvis-like agent on their phone that understands them. Want to book a trip to Hawaii, order food from your favorite restaurant or manage your personal finances? A future in which you and I can securely hand these tasks to personalized agents is possible, but from a technological perspective we are still far from it.