Generative Video in the Editing Suite

Adobe integrates AI video generation into Premiere Pro

A dynamic GIF featuring erupting volcanoes, a reindeer in the snow, animated fuzzy creatures, and a close-up of a human eye.

Adobe is putting a video generator directly into its popular video editing application.

What’s new: Adobe announced its Firefly Video Model, which will be available as a web service and integrated into the company’s Premiere Pro software later this year. The model takes around two minutes to generate video clips up to five seconds long from a text prompt or still image, and it can modify or extend existing videos. Prospective users can join a waitlist for access.

How it works: Adobe has yet to publish details about the model’s size, architecture, or training. The company touts uses such as generating B-roll footage, creating scenes from individual frames, adding text and effects, producing animations, and video-to-video generation such as extending existing clips by up to two seconds.

  • The company licensed the model’s training data specifically for training, so the model’s output shouldn’t run afoul of copyright claims. This stands in stark contrast to video generators trained on data scraped from the web.
  • Adobe plans to integrate the model with Premiere Pro, enhancing its traditional video editing environment with generative capabilities. For instance, among the demo clips, one shows a real-world shot of a child looking into a magnifying glass immediately followed by a generated shot of the child’s view.

Behind the news: Adobe’s move into video generation builds on its Firefly image generator and reflects its broader strategy to integrate generative AI with creative tools. In April, Adobe announced that it would integrate multiple video generators with Premiere, including models from partners like OpenAI and Runway. Runway itself recently extended its own offering with video-to-video generation and an API.

Why it matters: Adobe is betting that AI-generated video will augment rather than replace professional filmmakers and editors. Putting a full-fledged generative model in a time-tested user interface for video editing promises to make video generation more useful and an integral part of the creative process. Moreover, Adobe’s use of licensed training data may attract videographers who are concerned about violating copyrights or who want to support fellow artists.

We’re thinking: Video-to-video generation is crossing from frontier capability to common feature. Firefly’s (and Runway’s) ability to extend existing videos offers a glimpse of what’s to come.
