Multichannel testing
#1
I've been evaluating processing/routing options for more-than-stereo mixing with MB-10 (demo mode).  Much of this started in 32C v9, in conjunction with Atmos via the Dolby panner plugin and an external renderer, but I picked up where I left off now that there is an internal renderer at the ready.  I've done this sort of thing with many "stereo-only" workstations, just to see what's possible.

Some disclaimers - I'm aware that lots (maybe all) of this is specifically not supported.  Plugin failures, crashes, and basic incompatibility should be expected, and are not a bad reflection on the team at Harrison.  They have specifically said this release is not intended for this type of functionality.

These proofs-of-concept fall into a few basic categories:
- Managing multichannel playback of related positional audio.
- 2-in multi-out signal processing.
- Multi-in multi-out signal processing.
- Plugins that need to be forced into multichannel modes.
- Assistive plugins that load others internally.

Moving an object between two positions with additional changes in character (beyond position) is limited to characteristics that are additionally automatable - such as rolling a filter off while changing positions.  Other cases may involve two sets of panned objects receiving signals that are modified more deeply and simply cross-faded between the two desired results.  Routing a strip to another strip makes this possible, as long as multiple new faders pick up the same signals and process them differently.  As far as I can tell, there is no way for the source signal strip to reach the router source point without going through the fader.

Most multichannel effects plugins I've checked (reverbs, delays) want to determine their multichannel environment when they are added to the strip.  This obviously results in "stereo".  Some plugins will sense more output pins as they are added and adjust on the fly.  Others must be closed and re-opened.  In the worst cases the plugin must be de-activated before the pins are added, then re-activated.  Sometimes the plugin GUI must NOT be open at the time of the pin adjustments for them to stick.  Save your session regularly - MANY of these steps will crash Mixbus.  If you don't already, you may wish to enable all of AU, VST, and VST3 for your efforts.  In most cases, plugins that are available in all three flavors will not behave the same way under these conditions.  Some plugins that claim a certain track-width capability may not be able to live up to that stated spec in MB 10.

The good news here is that once you've managed to get these instances working, they only need the extra outputs assigned to additional strips to function rather normally.  Several of my tested reverbs can feed all 16 outputs in 9.1.6 mode.  I haven't tried more than 16.

So far I've not found anything as simple as multi-mono plugins (Pro Tools wording).  These are a good way to save on CPU work.

With CPU-intensive processes, it's sometimes beneficial to commit, freeze, or local-bounce.  This is not part of the MB design, so I'll work on solutions, perhaps doing preliminary work outside of MB-10 and then importing the processed items.  It's unknown (demo mode) whether playback audio generated within plugins actually makes its way into the ADM export, even when it plays properly in the session in real time.

For plugins that need more than 2 inputs, creating sidechain busses seems to work.  I've not worked with more than 8 sidechains per strip yet, but haven't bumped into any limitations.  It's a shame to use an entire channel strip just as I/O for a plugin.  Generally these routes "stick" using the routing window, but in a few instances I've needed to make a change to the sidechain source in the pin window first.  This functionality is needed to create the I/O setup for multichannel dynamics and similar track-management processes.  I've been able to get some of these to work natively, but the plugin is unaware of what's on which track, so some of the GUI labels are incorrect (L-C-R, etc.).  Chaining multichannel plugins this way seems natively impossible, or at least I have not discovered a way.  Sidechains can get 8 channels into a 7.1 comp, with 8 outputs, but another plugin afterwards only has access to the stereo pins from above.  The same hurdle seems to make it challenging to EQ the output of a non-stereo reverb without wrapping both into another host plugin.  You can EQ the individual stereo strips receiving the outputs, but there is no way to link the multiple controls.

There are two assistive items that seem to work with some effort: Blue Cat Patchwork and Plogue Bidule.  Both can host plugins and allow chaining and processing internally.  Both come in multiple flavors, and many of those don't work or crash Mixbus.  For basic stereo work, both work easily.  I've succeeded with Patchwork Synth VST at 8x16, and with Bidule 16x16 VST for stacking 7.1.2 processes together.  Patchwork is more intuitive and user-friendly, but Bidule is more flexible when it comes to predicting a plugin's channel-layout potential before trying and failing.  Patchwork has an elegant way to expose plugins from inside back into Mixbus.

Inside these wrapped environments there are easier ways to design mini-plugin configs to suit a task, but it is best to close the related GUIs while working with pin settings, as this can easily crash Mixbus.  Once things are operational they seem stable and recall properly after saving.  It has been difficult to have the router window, the plugin host window, and the pin window open simultaneously without crashing Mixbus.

It seems useful to have the "traditional" MB console support processing/summing/splitting upstream of object panning.  It is unfortunate that enabling Immersive panning defaults to ON for everything in the session, including the groups.  I'm sure for some this will be a useful default, but it makes this type of plugin work more challenging.  This upstream-downstream strategy also adds to an already increasing CPU load: every strip hosting multichannel processes bypasses the EQ/Comp/Gate, yet the DSP usage is still there.  Basic summing systems would be more lightweight.

So far I've not found a way (in the manual or by trial and error) to forecast if a plugin will allow extra input pins, extra instances, or extra output pins without just trying, and sometimes crashing out.

Aggregating multiple instances of these multichannel items would ideally be able to happen upstream of the groups, but lots of strips with processing at least can accomplish the task.

Lots of fun.

h
Reply
#2
Hmm.

There are two fundamental ways of mixing 'surround'. (OK, maybe three if you count ambisonics, but we aren't going there right now.)

The traditional multichannel surround is channel-based: for example, if you have a sound that you want to localize in the center-front-upper of the room (where there is no speaker), then perhaps you'd send a little bit of signal to the center channel, and a little bit to the upper-front-left and upper-front-right, with the hopeful result of a phantom sound in the upper-center.
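That distribution of one signal across several speakers is, at bottom, an equal-power pan law. A minimal sketch of the idea (the function name and angle parameter are illustrative, not anything from Mixbus):

```python
import math

def equal_power_gains(theta):
    """Split a mono signal between two speakers so the total acoustic
    power stays constant as the image moves.  theta ranges from 0
    (all speaker A) to pi/2 (all speaker B); pi/4 places the phantom
    image midway between them."""
    return math.cos(theta), math.sin(theta)

# Centered phantom image: equal gain to both speakers.
a, b = equal_power_gains(math.pi / 4)
# a**2 + b**2 stays 1.0 for any theta, so perceived loudness
# doesn't dip as the image moves between the speakers.
```

The same principle generalizes to three or more speakers; the renderer just computes more gains.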

The 'new' way is object-based: in this case you'd pan the signal to the center-front of the room, and this 'metadata' is saved along with the audio in the file. During playback on surround speakers, the renderer then does the channel-based panning itself, for whatever speaker setup is actually present.
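As a rough illustration of the object-based idea (the field names are hypothetical, not the actual ADM metadata schema):

```python
from dataclasses import dataclass

@dataclass
class AudioObject:
    """Object-based audio: the signal plus positional metadata.
    No speaker gains are baked in; the renderer computes them at
    playback for whatever speaker layout (or HRTF) is present."""
    samples: list   # the mono audio itself
    x: float        # left (-1) to right (+1)
    y: float        # back (-1) to front (+1)
    z: float        # floor (0) to ceiling (1)

# A sound panned to the center-front-upper of the room:
obj = AudioObject(samples=[0.0, 0.1], x=0.0, y=1.0, z=1.0)
```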

There is a temptation to put your channel-based mix directly into the atmos file that you are uploading to a streaming service. This is very common in post/film workflows... the different channel-mixes are called "beds" and it is very ingrained in their workflows to create these 'bed' mixes.

The problem is this:

With object-based mixing, if the user is listening binaurally (as most music will be), then the sound is rendered with the HRTF to "sound like" it's coming exactly from the center-front-upper(*) as it was encoded in the panning metadata. (*) most listeners won't *really* hear the sound from exactly that point, without a custom hrtf etc, but that's the intent.

But if you've distributed that sound into a channel-based 'bed' mix already, you have a problem. Upon rendering, you've got perhaps 3 objects (the center one, and the 2 uppers) that are intended to make a phantom center. But the HRTF is going to take those 3 sounds, apply a different HRTF to all 3, and you're going to get a 'blurry' image and probably a phase-y sound.
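The 'phase-y' result has a concrete mechanism: after the per-channel HRTFs, the three copies reach the ear with slightly different effective delays, and summing a signal with a delayed copy of itself comb-filters. A toy calculation (pure math; the 1 ms figure is just an illustrative delay, not a measured HRTF value):

```python
import math

def summed_peak(freq_hz, delay_s):
    """Peak amplitude of a unit sine summed with a copy of itself
    delayed by delay_s: |1 + e^(-j*2*pi*f*d)| = 2*|cos(pi*f*d)|."""
    return 2 * abs(math.cos(math.pi * freq_hz * delay_s))

# With a 1 ms path difference, 500 Hz cancels completely
# (the delay is half its period)...
null = summed_peak(500, 0.001)
# ...while 1 kHz is reinforced (the delay is a full period).
peak = summed_peak(1000, 0.001)
```

Across the spectrum this alternation of nulls and peaks is exactly the "blurry", comb-filtered sound described above.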

There are many, many details including the binaural mode, 'snap', etc etc which might affect things. "the devil is in the details"

But, given these issues, we determined that the best way for our users to get immersive data into the streaming service is to provide direct 'object' mixing, some stereo subgroups, and no "channel" mixing features at all.

I realize this is a major limitation for you, hodger, as we've prevented you from doing exactly what you need to do for your particular deliverables and workflow.

But the vast majority of our users just need to spread a few things around the room, and quickly export an adm to meet the expectations of their listeners on "immersive" setups.
Reply
#3
Hi Ben, You seem to miss the point 100%.

I did some research, and posted results, in the event that any of your users just want to use an Atmos-capable reverb or multichannel compressor. These tools are neither music- nor post-specific; they are simply tools that don't work easily in MB, and it seems I can save them some work and troubleshooting. You'll note that I do NOT and did NOT propose localising mix elements into an Atmos mix using beds as opposed to objects. It was about using multichannel PLUGINS in the context of an Atmos mix, which you'll discover is quite standard for pro-level Atmos music delivery. You may wish multichannel plugins did not exist, but sorry.

You may have a challenge convincing the LiquidSonics customers (plus those of many other vendors) who use reverbs in their Atmos music mixes that they are idiots. You have the same challenge convincing me. Stereo reverbs in a multichannel setting are a problem that was solved decades ago, and nobody wants to go back there.

No need to "school" me on Atmos, I understand totally. No need to "school" me on mixing, I understand totally. I said up front that this post was NOT an attack on you or the team. Unfortunately, you seem to have taken issue, and your response is on a totally different topic than my thread.

Perhaps instead of attacking me, attack my research, and provide "better" ways to implement 9.1.6 reverbs, or to compress foundational mix elements outside the stereo field. You don't have to take it from me: do the research and you'll find that none of those Grammy-winning Atmos mixers are leaving their pro-audio toolbox outside the control room door when they mix Atmos. It's the completeness and maturity of the tools that provides great results. Localisation is just a part.
Reply
#4
(04-24-2024, 09:08 PM)hodger Wrote: No need to "school" me on Atmos, I understand totally.  

Perhaps instead of attacking me, attack my research, .......

Hi Hodger
 
First, thank you for your post; it gives those of us who are new to Dolby Atmos some insight into the topic.

So, to your answer to Ben: again, remember that many of us have little or no experience with Dolby Atmos. You may know all about it, but I do not. So when Ben answers your post with the explanation he gives, it's great for a lot of other readers on the forum.

Last, I can't see any attacks from Ben in his answer. I do see some thinking about why it's implemented the way it is in Mixbus, and about the problems that can occur.

Let the discussion keep going, so those of us who need it can learn more.

Steinar :-)

Mixbus Pro 10.0, Kubuntu Linux 64 23.10, Stock Low latency kernel, KXstudio repos, i7-3720QM CPU@2.60GHz, 12 Gb RAM, nvidia GeForce GT 650M/PCIe/SSE2, X.org nouveau driver, Zoom L12 Digital mixer/Audio interface
Reply
#5
Thank you, Hodger, for your research and your attempt to make it work. I have to say you are hitting the most important point 100%. We have to be able to use multichannel reverbs and the like for music mixing, and I too cannot think of a good reason not to implement that option for multichannel busses when thinking about professional music production in Atmos. But as I read somewhere that this function is planned, I think it was just too much work to get done properly for this release, as it would cause issues we haven't thought about yet. I have no problem if that was the stated reason why this function is not in. But in the comments from Ben it seems like it should be sold as a feature, and therefore I can understand why you feel a little insulted by that (I do too). In a way, Ben seems to be telling people who have done Atmos for music all along how they should do Atmos for music. Which is quite an unfortunate audience.
At this point Atmos in Mixbus is, for me, just a little hobby playground and nothing to do serious work with. That is unfortunate, as the overall workflow is quite handy and simple, but we will need two major functions for serious day-to-day work: first, multichannel busses, and second, import of ADM files (I actually haven't checked whether that one is in yet, because of the first point). Then there are some wishes on my list which are just comfort things, such as: switching between stereo and Atmos should need just one click, and so on.
@Ben: for me it is no problem that the Atmos release is like it is, but please take the input of professionals as a hint about what these kinds of people need in order to use our beloved Mixbus for their day-to-day work. I (and I think Hodger too) do not want to insult or attack you and the team; we just want to show what would make it usable in professional workflows (which have been proven over a long period of time now). If that is not intended, that's another thing - just state it somewhere.
Best
Arne
2023 Mac mini m2pro with 32GB RAM with audient id44mk2
Reply
#6
It's impossible to solve every problem of every professional user, in every professional space.  That's why no single DAW does everything, for everybody, all the time.
  
Mixbus is explicitly a 'music creation' tool.  Music is almost exclusively distributed by streaming services (although personally I'm a vinyl nerd :) ), and immersive music mixes are almost exclusively experienced on headphones.

Mixbus v10 is extremely focused on meeting that need:  immersive music creation for people that will probably only hear it on headphones.  And yes there is an audible benefit: you get a lot more 'space' in an Atmos mix ... it's a way of avoiding the loudness wars and delivering a higher-fi mix.  (your mileage may vary depending on music style etc)

I'm not suggesting that you ignore your surround toolbox.  But that toolbox is already in your other DAW :)  The Mixbus toolbox is a different toolbox that is very well suited to making an immersive music mix for distribution on streaming services... even if you are not already an expert at all the details of surround mixing.

-Ben
Reply
#7
So, help me out here.  

After hijacking a thread about creative use of routing and plugins, twice, do we have permission to carry on with the original topic?  

I will take no offense if you want me to take a hike (just say so clearly), but it's hard to determine the justification for prohibiting plugin threads when the team has traditionally been otherwise helpful.  I've not seen any other such threads historically receive so much pushback from the higher-ups as mine has this past week.

h
Reply
#8
Hey Ben, that is my whole point: it does NOT have everything you need for immersive music mixing, as you like to claim, but it is okay for now. I will not use Atmos in Mixbus, although I like the approach and although it would be nice to do immersive mixes right out of my mixing and production session as intended. I hope we will be able to use 3D room FX soonish; then the Atmos thing will be a good addition and a real feature. Until then I will proceed to ignore the feature and stop giving hints about the tools you will need if you have a major Atmos release to do, as you have clearly pointed out that this was not Harrison's aim and that you do not want further discussion of the Atmos implementation. My point is clear: for stereo, Mixbus is the best tool I have; for Atmos it is unfortunately just a toy.
By the way, yes, headphones are probably the most common way Atmos gets listened to, but there are some good soundbars too ;)
2023 Mac mini m2pro with 32GB RAM with audient id44mk2
Reply
#9
@hodger: yes, and I apologize
Reply
#10
I've started working my way through some of my available options for side-chain-able dynamics plugins. I'm assuming the internal compressors cannot receive external side-chain.

None of the UAD dynamics accept SC. Only two UADx units can side-chain (API 2500 and API Vision Channel).
So far, 17 from Plugin Alliance accept side-chain; however, all of the AU versions I've tested completely crash MB10 as soon as I try to add a side-chain pin - MB10 just disappears. Switching to the VST3 version has worked so far, but I haven't made it completely through the list yet.

h
Reply

