Adobe just dropped new AI-powered video editing tools for Premiere and After Effects that promise to automate some of the most tedious parts of post-production. The headline feature is Object Mask, which lets you hover over any person or object in your video and click to generate a tracking mask in seconds. I’ve spent countless hours in edit bays manually rotoscoping, so this automation is near and dear to my heart.

According to Adobe, the AI processing runs on-device using its own models, which means faster performance and better data privacy than cloud-based alternatives. The company explicitly states it won’t use your footage to train future models, addressing a key concern for commercial editors working with proprietary content.

Adobe claims the updated Shape Mask tools now track objects 20 times faster than previous versions. For context, the company suggests a 30-second commercial spot with complex masking work that used to take two hours might now take six minutes of actual processing time.

Beyond masking, Adobe integrated Firefly Boards directly into Premiere’s workflow. You can now pull AI-generated assets from Adobe’s digital canvas straight into your timeline without the usual import/export dance. Adobe Stock integration means you can license and place stock footage without leaving the editing interface.

After Effects also got a few updates. SVG import support finally bridges the gap with Illustrator workflows. The new 3D parametric mesh tools (cubes, spheres, cylinders) let motion graphics artists build geometric objects directly in the compositor instead of modeling them elsewhere first. These updates reflect Adobe’s broader strategy of embedding AI throughout Creative Cloud while keeping processing local where possible.

The AI-powered video editing space is getting frothy, with tools like Runway and Pika offering substantial generative capabilities. Adobe’s approach focuses on accelerating existing workflows rather than replacing them entirely (which makes its suite of tools much easier to integrate into existing organizations).

In my experience, this kind of production efficiency rarely saves time (although it certainly can). In practice, it supercharges creative exploration. When my production company put the first commercial digital tapeless post-production facility online in 1986, we thought it would be a huge time-saver. Nope. It was the beginning of an era of endless creativity. Experimentation expanded to fill the time allowed. Remember: creative projects are never finished… you’re just forced to ship them one second before the deadline.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.

About Shelly Palmer

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media, and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN, and writes a popular daily business blog. He’s a bestselling author and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.

