Adobe says video generation is coming to Firefly this year
Users will get their first chance to try out Adobe's AI model for video generation in just a couple months.
The company says features powered by Adobe’s Firefly Video model will become available before the end of 2024 on the Premiere Pro beta app and on a free website.
Adobe says three features – Generative Extend, Text to Video, and Image to Video – are currently in private beta but will be publicly available soon.
Generative Extend, which lets you extend any input video by two seconds, will be embedded into the Premiere Pro beta app later this year. Firefly’s Text to Video and Image to Video models, which create five-second videos from prompts or input images, will be available on Firefly’s dedicated website later this year as well. (The time limit may increase, Adobe noted.)
Adobe’s software has been a favorite among creatives for decades, but generative AI tools like these may upend the very industry the company serves, for better or worse. Firefly is Adobe’s answer to the recent wave of generative AI models, including OpenAI’s Sora and Runway’s Gen-3 Alpha. These tools have captivated audiences, producing clips in minutes that would have taken a human hours to create. However, the early generation of such tools is generally considered too unpredictable for professional use.
But controllability is where Adobe thinks it can set itself apart. Adobe’s CTO of digital media, Ely Greenfield, tells TechCrunch there is a “huge appetite” for Firefly’s AI tools in places where they can complement or accelerate existing workflows.
For instance, Greenfield says Firefly’s generative fill feature, added to Adobe Photoshop last year, is “one of the most frequently used features we’ve introduced in the past decade.”