DaVinci Resolve 20 brings ML features that DAWs should have added a long time ago!

It’s telling that DaVinci Resolve 20 just shipped audio features that DAWs and audio applications should have built long ago! I’ve been advocating for exactly this: embracing offline processing workflows, using machine learning models to interact with the timeline, and exposing APIs that give developers access to DAW internals, rather than pouring all the research into fast real-time code.

[Image: DaVinci Resolve 20 ML features for audio]

The user can wait. They now know they can wait; the UX of ChatGPT and friends has set that expectation. It’s amazing how intuitive DaVinci Resolve’s new features are: creative-friendly use of machine learning models instead of the “drag to generate more stuff” approach Adobe has taken, or the “type to generate stuff” that Suno, Udio and every other text-to-music app is doing. Here are some features in the new DaVinci Resolve that are absolutely amazing:

  1. Making versions of an existing track just by changing the length of its region. In DAWs in 2025, that gesture still only gets you pitch/tempo stretching, even while the entire audio industry cribs about “oh, they changed the film edit and now we have to re-edit the music again”. How has this age-old problem never been attacked more smartly? (The first sketch after this list shows exactly the naive stretch that DAWs stop at.)
  2. Automatic mixing. In DAWs you can reach for iZotope-style “mastering assistant” plugins, but you’re limited in every direction: you need plugin instances on the relevant tracks, those instances have to talk to each other just to produce a starting point, and you still get NO access to automation lanes. Imo, ordinary audio plugins can NEVER solve this, at least not with the limited access hosts give them. It will have to be the DAW itself that brings the feature. Steinberg's Nuendo 14 did bring a feature that does this, but the one in DaVinci Resolve 20 can detect the type of audio on each track (voice, music, sound effects), colour-code them AND then mix them for target platforms! What?? 🤯 (The second sketch after this list shows the rough shape of that idea.)
  3. Voice models. What’s the most intuitive way to do voice-overs? Record a rough take and then polish it. Well, that’s now possible in just a few clicks.
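
To make point 1 concrete, here is the stretch-only behaviour most DAWs stop at: a plain time-stretch that warps the whole cue uniformly instead of musically adding or removing material. A minimal sketch, assuming librosa and soundfile are installed; the file names and target length are placeholders.

```python
# Naive "change the region length" = uniform time-stretch.
# File names and the target length are placeholders.
import librosa
import soundfile as sf

y, sr = librosa.load("cue.wav", sr=None)   # keep the original sample rate
original = len(y) / sr                     # current cue length in seconds
target = 90.0                              # new scene length (assumed)

# rate > 1 shortens the audio, rate < 1 lengthens it
stretched = librosa.effects.time_stretch(y, rate=original / target)

sf.write("cue_stretched.wav", stretched, sr)
```

This warps every bar equally; a smart version would add or drop whole phrases, which seems to be what Resolve’s feature is going for.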
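And for point 2, a rough sketch of what DAW-level auto-mixing could look like: classify each stem, then balance it against a per-platform loudness target. This is not how Resolve actually implements it; classify_stem() and the per-class offsets are hypothetical, while pyloudnorm does the actual LUFS measurement.

```python
# Hypothetical DAW-level auto-mix toward a platform loudness target.
import soundfile as sf
import pyloudnorm as pyln

PLATFORM_TARGET_LUFS = {"youtube": -14.0, "broadcast": -23.0}
CLASS_OFFSET = {"voice": 0.0, "music": -6.0, "sfx": -9.0}  # invented values

def classify_stem(path):
    """Placeholder for an ML classifier returning 'voice', 'music' or 'sfx'."""
    raise NotImplementedError

def automix(stem_paths, platform="youtube"):
    target = PLATFORM_TARGET_LUFS[platform]
    mixed = []
    for path in stem_paths:
        data, rate = sf.read(path)
        kind = classify_stem(path)
        loudness = pyln.Meter(rate).integrated_loudness(data)
        # Pull each stem toward its class target, not toward one flat level
        mixed.append(pyln.normalize.loudness(data, loudness, target + CLASS_OFFSET[kind]))
    return mixed
```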

So here’s what I think every creative-product maker should explore:

  1. ML engineers gotta talk to creatives. OR creatives gotta learn to develop and just build it for themselves. No other approach will produce good products, except of course by chance!
  2. DAWs gotta make their APIs public, allowing developers to build not just plugins but also tools that can interact with the timeline. REAPER already has this (see the ReaScript sketch after this list), the Avid Pro Tools SDK is enabling it too, and I have a strong hunch that the Lua scripting in HALion means such options might show up in Cubase / Nuendo as well.
  3. Explore non-real-time processing. Use ARA if that helps, but know that as long as the processing does a better job, users are okay with waiting for the results (offline sketch after this list).
  4. Process in the cloud. Pretty sure users won’t mind audio being processed on a server and rendered back onto the timeline, especially when upload/download speeds beat their own machine’s processing time (cloud sketch after this list).
  5. New startup founders gotta stop playing the VC game, and start making products that will matter.
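
The REAPER point is easy to demonstrate: ReaScript can read (and write) the timeline directly. Here is a tiny sketch in the Python flavour that just lists every media item’s position and length; the same API can also move, split, and render them.

```python
# Runs inside REAPER (Actions > Show action list > New action > Load
# ReaScript). Lists each media item's position and length on the timeline.
from reaper_python import *

PROJECT = 0  # 0 = the currently active project
for i in range(RPR_CountMediaItems(PROJECT)):
    item = RPR_GetMediaItem(PROJECT, i)
    pos = RPR_GetMediaItemInfo_Value(item, "D_POSITION")
    length = RPR_GetMediaItemInfo_Value(item, "D_LENGTH")
    RPR_ShowConsoleMsg("item %d: %.2fs at %.2fs\n" % (i, length, pos))
```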
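On the offline point, the whole win is that once you drop the real-time constraint, the processing chain can be arbitrarily slow and arbitrarily good. A minimal sketch; enhance() stands in for whatever heavy ML model you’d run, and the file names are placeholders.

```python
# Offline mindset: pull the clip off the timeline, let the model take as
# long as it needs, render the result back. No buffer-size constraints.
import soundfile as sf

def enhance(audio, rate):
    """Placeholder for a slow, high-quality ML process."""
    raise NotImplementedError

audio, rate = sf.read("dialogue_raw.wav")
processed = enhance(audio, rate)   # may take minutes; the user can wait
sf.write("dialogue_clean.wav", processed, rate)
```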
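And for the cloud idea, the client side can be as simple as upload, poll, download. Everything here is hypothetical: the endpoint, the job JSON shape, and the file names exist only for illustration.

```python
# Hypothetical cloud round-trip: upload a stem, poll the job, download
# the render. Endpoint and response shape are invented.
import time
import requests

API = "https://example-audio-cloud.invalid/v1"  # placeholder endpoint

with open("stem.wav", "rb") as f:
    job = requests.post(f"{API}/jobs", files={"audio": f}).json()

while True:
    status = requests.get(f"{API}/jobs/{job['id']}").json()
    if status["state"] == "done":
        break
    time.sleep(2)  # the timeline stays responsive while the server works

with open("stem_processed.wav", "wb") as out:
    out.write(requests.get(status["result_url"]).content)
```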

What do you think of this new update? Let me know in the comments or using the contact form. Happy to discuss more ideas.