Curious About MIDI?
#1
Hey everybody,

Please pardon my ignorance, but I keep hearing about how much better other DAWs are at MIDI than MixBus. Here's my question: What kind of MIDI editing are people doing that requires such "advanced" editing tools?

This is my MIDI background and experience: I learned about MIDI from Roland/Kurzweil manuals when I was a freshman in high school in 1988. My first sequencer was a Roland MC-500, followed by the MC-500 MKII, both of which I used for years and years and still have to this day. I'd program drum beats on the rhythm track, record all the MIDI parts from my keyboard, and play them back into a sound module. The only editing I ever did was cut, copy, paste, delete, quantization, velocity editing, CC volume, pan, sustain, and program changes, plus occasional transposing and very minor step editing to correct one or two bad notes. I could operate the MC-500 blindfolded, and I still have the muscle memory for its keystrokes.

Beyond that basic editing, I've never had to edit MIDI extensively; I'd just play it the way I wanted it. Even to this day, these are the only MIDI editing tools I've ever actually needed in Sonar, Studio One, and Ableton. When it comes to MIDI, my MC-500 could still totally get the job done today. Reviewing the MIDI section in the Mixbus manual, it seems to cover exactly what I'd need. What scenarios am I missing that require the more "advanced" MIDI editing tools I hear about in all the other DAWs?
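For what it's worth, the basic edits listed here (quantize, transpose, velocity changes) really do boil down to simple arithmetic on note events. A minimal sketch, purely illustrative and not any DAW's actual code, assuming a 960-PPQN tick grid and the standard 0-127 MIDI ranges:

```python
# Illustrative sketches of the classic MIDI edits. The 960-PPQN grid
# and clamping choices are assumptions for the example, not any
# sequencer's real internals.

def quantize(tick, grid):
    """Snap a note-on time to the nearest grid line
    (e.g. grid=240 ticks = a sixteenth note at 960 PPQN)."""
    return round(tick / grid) * grid

def transpose(note, semitones):
    """Shift a note, clamped to the valid MIDI note range 0-127."""
    return max(0, min(127, note + semitones))

def scale_velocity(velocity, factor):
    """Scale a velocity, clamped to 1-127 so the note still sounds."""
    return max(1, min(127, round(velocity * factor)))
```

Everything an MC-500 does can be expressed as small transforms like these applied to a list of note events.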

Thanks,

Donny
Windows 10 64, HP Z-220 Workstation, I7 3770 16 GB RAM, RME Multiface 2, PCIe
Mac OS Sierra, 2012 Mac Mini, i5 16 GB RAM, Behringer XR18
Mixbus 32C 6.2.26
Harrison MixBus V5.2
Presonus Studio One 5
Statesboro, GA, USA
#2
I did about 6 months on the MC-500, for a project, a long time ago. I also used to have two MC-202s, for a longer time. Those were the days. While that kind of localized MIDI editing is still around, what's been added these days is more of a macro facility to copy and move things around, and to structure and edit at a higher level.

As I've mentioned many times, I create in Bitwig, then export tracks to be mixed in Mixbus32C. Bitwig, IMHO, is lousy at mixing, and its sound is nowhere near what Mixbus32C can achieve right from the start. After working at the creation level for quite some time and having heard the material a lot, I export the tracks into Mixbus32C and the difference immediately jumps out at the ears. I do not use MIDI at all in Mixbus32C.

As for editing, here are some examples of higher level MIDI editing.

First, ideas are recorded into clips. These can be MIDI or audio. Clips belong to scenes; the ideas are actually scenes made out of clips, and those scenes can be played in any order. Scenes can be renamed.


Clips can be superimposed, making it possible to see immediately where each note corresponds. One clip can be locked, i.e. made non-editable. All kinds of modulation can also be edited. MIDI clips can be superimposed on audio clips in order to precisely match MIDI and audio events.


After a while of playing around with the ideas, the scenes can be recorded in real time, as they are played, into an arranger which looks more like a regular arrangement of tracks.


Then the editing can continue in the arranger. What's exported to Mixbus32C are the tracks from the arranger.
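The clip/scene/arranger workflow described above can be modeled as a tiny data structure, just to make the idea concrete (the names here are invented for illustration, not Bitwig's API):

```python
# Toy model of the clip-launcher workflow: scenes hold clips,
# and playing scenes in any order "records" a linear arrangement.
class Scene:
    def __init__(self, name, clips):
        self.name = name      # scenes can be renamed
        self.clips = clips    # labels for MIDI or audio clips

def record_arrangement(scenes, play_order):
    """Playing scenes in some order produces the linear list of
    clip stacks that ends up on the arranger timeline."""
    return [scenes[i].clips for i in play_order]

verse = Scene("verse", ["drums", "bass"])
chorus = Scene("chorus", ["drums", "bass", "lead"])
arrangement = record_arrangement([verse, chorus], [0, 1, 0])
```

The point is that the "higher level" editing operates on whole scenes, not on individual notes.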

#3
I could write a novel. But the general answer has to start from a baseline understanding of WHY anyone uses MIDI. The "audio editing" for a podcast is completely different from the "audio editing" for a surround video, which is completely different from the "audio editing" for a music production with competent performers, which in turn is completely different from when your "job" is making amateur musicians sound professional.

I've used MIDI sequencing since... a similar time as you. Always software, as hardware offered NOTHING I needed in, I'm going to say, '89/'90. What I used it for THEN was to synchronize keyboard sequences to our multitrack (16-track, from memory) tape machine. MOTU's Performer was the ONLY option for this, as it allowed tapping, and thus recording, of tempo changes in reference to incoming SMPTE timecode from the tape machine. The reason was that the band took most of the 16 tracks between drums, bass, guitar, and vocals, and if they needed anything more from me than a simple live keyboard part, it needed to be sequenced.

I like to say I've spent a decade apiece with the "three big ones": Performer through the '90s, moving to Logic (on Windows) until Apple bought them and canceled that, then to Cubase/Nuendo in some form for another decade, only to come back to Logic at v10 for its innovative Drummer track and AUTO tempo mapping a few years back. So my time with Logic was split into two eras, I guess, but it probably adds up to about the same as Cubendo and MOTU. This information is for background.

Tech fact: PPQN-based MIDI sequencing is unable to reproduce human input. Ever. In any scenario. It never was, and nothing's changed since 1986. MOTU was, I'm going to say, 980 PPQN then, which is what Logic is now. Cubase is still higher, but down from allowing ludicrously high resolutions that cause, ironically, WAY MORE timing issues than the 980 rounding. And technically, I ran everything at double tempo in order to double that resolution. Most people who use MIDI consider it "close enough", which is what it is. That is NOT a 'fact'; it's simply their justification for using MIDI.
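To put the "close enough" debate in numbers: a quantized event can land at most half a tick from the nearest grid line, so the worst-case error follows directly from PPQN and tempo. A quick sketch (the PPQN figures in the comments are illustrative examples, not claims about any specific DAW):

```python
def worst_case_error_ms(ppqn, bpm):
    """Worst-case timing error when an event is rounded to the tick
    grid: half of one tick's duration, in milliseconds."""
    quarter_note_ms = 60_000 / bpm      # one beat in milliseconds
    tick_ms = quarter_note_ms / ppqn    # one tick in milliseconds
    return tick_ms / 2

# At 120 BPM:
#   96 PPQN (early hardware) -> ~2.6 ms worst case
#   960 PPQN                 -> ~0.26 ms worst case
# Doubling the tempo, as described above, halves the error again.
```

Whether a quarter-millisecond shift on every note is audible is exactly the argument being had here.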

So, now...what do I use MIDI for in 2018?

-I use Logic's Drummer track in combination with its auto tempo mapping of human audio to make drums for song demos. Technically it's using "MIDI" under the hood, but really it's using a "super-PPQN" stream to auto-quantize (i.e. FOLLOW) 96 kHz audio. I know because I tapped it in the Environment to remap its ArticulationIDs to various third parties' hi-hat articulations. I have to render the stream as audio as it plays to retain the feel. For a "demo" recording, the MIDI is likely "close enough" for me, but it's a matter of principle.

-I sequence strings and (less often) horns. These require individual lines playing legato, yet with group timing and intonation. They are never going to be played (well) with 10 fingers. They are 10% musical arrangement and 90% programming/editing.

-When I'm in Logic or Cubase, I use their built-in MIDI sample triggering to add room-mic samples to home-recorded drum tracks. This gets its own entry because it doesn't technically REQUIRE MIDI; Slate, SPL, and other solutions trigger samples from the audio itself directly. But since I'm working with room (distance) mics, tightness of timing is less critical for me. I'd rather NOT do this with MIDI, but it works and it WAS built in. Moving forward, I guess I'll buy Slate or something if I'm going to continue in Mixbus. There's NO reverb as good as real, nice room-mic samples. IME.
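The drum-triggering idea above reduces to: detect transients in the audio, then emit note-ons whose velocity tracks each hit's level. A hedged sketch, where the note number, PPQN, and amplitude scaling are made-up choices for illustration, not Logic's or Slate's actual behavior:

```python
def transients_to_midi(onsets, note=38, ppqn=960, bpm=120):
    """Turn detected (time_sec, peak_amplitude) transients into
    (tick, note, velocity) note-on events. Amplitude is assumed
    normalized to 0.0-1.0; note 38 is the GM snare, chosen only
    as an example."""
    tick_sec = 60.0 / bpm / ppqn
    events = []
    for t, amp in onsets:
        velocity = max(1, min(127, round(amp * 127)))
        events.append((round(t / tick_sec), note, velocity))
    return events
```

Triggering directly from audio (the Slate/SPL approach) skips the MIDI round-trip entirely, which is why timing is tighter there.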

OK... believe it or not, that actually wasn't the novel. I CAN lay out the actual technical bits, but I find they're less... understood if, for example, someone doesn't understand how a string or horn section intonates, or the level of inter-kit-piece timing that makes a "pocket", or how most of the groove and "perceived realism" is in the hat more than in the "louder" kit pieces, or has always recorded all music to click tracks so that a hardware MIDI sequencer is even usable. It becomes a technical list of "but why?"... and "wow, I just drag some Toontrack loops around and fix the goofs in my keyboard playing".
Win10pro(2004) : i7 8700/RX570 8gb/16gb/970evo : RME PCIe Multiface : Mixbus 32c 4.3 & 7.2
Other DAWs: Logic 10.4 (MacBook) Cubase 10.5 (PC)
Music: https://jamielang.bandcamp.com
#4
I would also refer you to discussions somewhere like VI-Control, where people have Macs running Cubase or Logic plus 2+ Windows tower "slaves"...

I think a lot of the understanding you're looking for involves how instruments changed when they moved to software 20 years ago. A software VI isn't remotely like a software version of the old hardware modules. Sound designers have worked around MIDI's shortcomings by coming up with clever uses for various CCs and "out of range" note control (keyswitching) to make elaborate instruments where the "editing" is 80%+ of how the end sound is created.
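To make the keyswitching trick concrete: notes below the instrument's playable range never sound; they silently switch the articulation used by the notes that follow. A toy sketch, where the note numbers and articulation names are invented for illustration rather than taken from any real sample library:

```python
# Hypothetical keyswitch layout: three low notes select articulations.
KEYSWITCHES = {24: "legato", 25: "staccato", 26: "pizzicato"}

def apply_keyswitches(notes, default="legato"):
    """Consume a note stream: keyswitch notes set state and are
    dropped; playable notes come out tagged with the current
    articulation."""
    articulation = default
    tagged = []
    for note in notes:
        if note in KEYSWITCHES:
            articulation = KEYSWITCHES[note]
        else:
            tagged.append((articulation, note))
    return tagged
```

In a real library the "editing" is inserting and nudging exactly these switch notes (plus CC curves) so every phrase uses the right articulation, which is where the hours go.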

I personally hate doing strings because of that editing burden. But I can just do what I did in the days of Kurzweils, which was to play a "pad" of strings, then come back and add violin and cello counterpoint over it as needed.

Here's one of the oldest: something I bought in 2000-ish for my GigaStudio PC, and, not long after, the more self-contained plug-in instrument version of it.

https://youtu.be/A4zwfY72ILA

If you Google things like "VST Expression Cubase" you can geek out on the programming needed by instruments from this century. I should point out that it pained me; I hated that hardware stopped, but it did. I even bought a Kronos on release because it was the ONLY hardware to embrace the tech that killed hardware use in the studio, apparently forever.

And keep in mind, I make nothing remotely "electronica"-sounding, so I think there would be a whole OTHER set of features involved with that. I would hope others could fill in those blanks.

Also note: score editing came to MIDI sequencers in the mid-'90s. Being able to select multiple string lines and have them display together on a grand staff makes arranging way easier. I'm nowhere near a "great reader", but it's much more intuitive than staring at a bunch of colored blocks to figure out which one is the F# I hear that needs to be changed...
#5
There's also the kind of MIDI processing done by applications like Keylit and Usine, and the upcoming Architect by Loomer (which also runs as a VST). For instance, Usine Hollyhock, developed by French bassist Olivier Sens (Stéphane Grappelli, Michel Portal, and many others), allows for creating MIDI processing that extends to video and to controlling DMX lights on a stage or in an installation. Do you want a piano roll dedicated to a MIDI processing device? A patch in Usine Hollyhock can consist of music, video, and lights. These MIDI environments also come with a physics engine that models physical interaction between objects.

This could all sound academic, but it's actually done with the user in mind, since these are business ventures.

Usine Hollyhock demo
https://www.youtube.com/watch?v=FdKidV1ho-E
#6
Thoughts Don? Have we sent you down a rabbit hole?
#7
Haha! I've been doing lots of housework lately! But thank you guys so much for your input on this. I guess I'm just very "meat and potatoes" when it comes to MIDI. I'll let this sink in a little more.
#8
It's not so much that the MIDI editing itself is more advanced. I think there is just more control over what you can do with MIDI: MIDI routing and the combination of virtual and hardware instruments.

#9
Architect, from Loomer, now in public beta, is certainly a deep MIDI editing plug-in, with over 300 functions. It has a LabVIEW feel to it, i.e. visual programming. There's even a print module to print to a console from within your MIDI program.