Berklee Professor Steve MacLean Talks SONAR in the Classroom

Hello to all SONAR users!

Over a 20-year span of helping artists and music producers, it is hard for me to picture the sheer number of students I’ve had the opportunity to work with.  That work has taken every format, from private one-on-one and small group lessons to many years in Berklee classrooms and the Berklee online school (teaching production techniques in SONAR).  Somewhere between 12,000 and 20,000 students is my not-so-calculated guess!  The truth is that I’m grateful for every one of them: each helped teach me to be a better coach and teacher, and the musical rewards when students apply what they have learned are wonderful and amazing.  It gives me great satisfaction to know that these people will continue making progress and building the music careers they wanted.

One area of confusion that constantly comes up is this: 

I often notice that the technology almost forces people to work in ways that are unnatural for what they are trying to do.  A great example is when the musical “style” a student is pursuing so vigorously ends up sounding lifeless and sterile.  You might ask: how can musical style play such a huge role in the successful outcome of a production project?

Because so many types of commercial music require a rigid, fixed approach to tempo, it is clear why the companies developing the software tools we record with tend to focus on making that approach faster and easier.  This, however, presents an important distinction that must be made before the recording session starts.  It may seem a simple call, but it is hard to ignore the fact that the computer can do all kinds of wonderful creative tasks for us – as long as the music is playing along at the decided-upon project tempo.  That means the music was created from scratch by one of two methods:

Either the music started out electronic in nature (which again may imply various styles of music), or the live musicians being recorded played along with some kind of click track.

Certainly there are a good many musical styles where the tempo can remain fixed (or even follow planned tempo changes via a tempo map the performers follow), and in these cases it may be best to play along with a click track to take advantage of the many processing functions that depend on it.  However, there are also a good number of musical styles where that kind of rigid time-locking will force the music to sound dull and lifeless.  And in other cases it makes no sense at all to be locked into any tempo whatsoever!  The music will lose its ability to flow and breathe without a good amount of work creating a tempo map that captures some natural flow of musical time.  Add to this the time it takes to get a recorded performance glued to that tempo without sounding stiff or just plain bad, and you can see why, in these cases, the technology is almost working against the artist’s vision being realized.
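
(If it helps to picture what a tempo map really is, here is a minimal sketch in Python – purely illustrative, not SONAR’s internal format – that converts a beat position into clock time from a small list of tempo changes.  The function name and the example tempos are just assumptions for the demonstration.)

```python
# Minimal sketch of a tempo map: a list of (beat, bpm) changes,
# converted to absolute time in seconds.  Names and layout are
# hypothetical illustrations, not SONAR's internal format.

def beat_to_seconds(beat, tempo_map):
    """Convert a beat position to seconds using a piecewise-constant tempo map.

    tempo_map: list of (beat, bpm) pairs, sorted by beat, starting at beat 0.
    """
    seconds = 0.0
    for i, (start_beat, bpm) in enumerate(tempo_map):
        # Where does this tempo segment end?
        end_beat = tempo_map[i + 1][0] if i + 1 < len(tempo_map) else float("inf")
        if beat <= start_beat:
            break
        segment_beats = min(beat, end_beat) - start_beat
        seconds += segment_beats * (60.0 / bpm)   # one beat lasts 60/bpm seconds
        if beat <= end_beat:
            break
    return seconds

# Example: a ballad that starts at 72 BPM, relaxes to 66 BPM at beat 32,
# then pushes ahead to 76 BPM at beat 64 (assuming 4/4 time).
tempo_map = [(0, 72), (32, 66), (64, 76)]
print(beat_to_seconds(48, tempo_map))  # clock time of beat 48, in seconds
```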

What’s the problem?  Only that in these cases the clear choice is to use the recording device for capturing and editing the sound files with no regard for the sequencer’s click track at all.  (Some examples: would we want to interfere with how a group of masterful jazz musicians decides, in the moment, to “flow in time” by forcing them to play with a click track?  How about a thought-provoking singer/songwriter performing a lovely ballad?  Could that song carry deeper meaning and feeling if the artist is free to shape the time any way that suits the moment?  Or perhaps a recording of a Japanese flute performance?  I hope this gives a good sense of what I am focused on here.)  The tempo-based tools are wonderful to use and learn when they serve the best outcome for the expectations of the “style,” but in these other cases they are of little use.  There is simply too much GOOD music where the only wise answer is to let the artists close their eyes and play from the heart, at the tempo they feel, from start to finish.  Once we’ve captured that magic, is there any real point in wasting countless hours making those tracks fit a certain tempo?  We could do this after the fact with SONAR’s AudioSnap features, but the gains are very small compared to the time spent creating such a detailed tempo map.  Is it really critical that measure 19 in the musical performance is also measure 19 in the sequencer?  Not at all.  It is only the music that matters here, period.

The key area of confusion when this comes up is in making the best use of the wonderful Snap To Grid functions available in SONAR.  In any of our rigid, tempo-based commercial productions these absolutely speed up our work a great deal, yet they are of no help in making clean, perfectly timed edits otherwise!  In those cases we must rely on our ears and experience, listening carefully to the timing to guide us toward the best possible edits for the perfection we’re after.
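
(For the rigid, tempo-locked case, the arithmetic behind a snap grid is simple.  Here is a small illustrative sketch – plain Python, not anything taken from SONAR itself – that snaps an edit point to the nearest grid line for a given tempo and note division.)

```python
# Illustrative sketch of snap-to-grid arithmetic (not SONAR's implementation):
# snap an edit point, in seconds, to the nearest grid line for a given
# tempo and note division (4 = quarter notes, 16 = sixteenth notes, ...).

def snap_to_grid(time_seconds, bpm, division=16):
    quarter = 60.0 / bpm                 # length of a quarter note in seconds
    grid = quarter * (4.0 / division)    # length of one grid step in seconds
    return round(time_seconds / grid) * grid

# Example: at 120 BPM a sixteenth-note grid step is 0.125 s,
# so an edit placed at 1.337 s snaps to 1.375 s.
print(snap_to_grid(1.337, bpm=120, division=16))
```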

When students begin to depend on Snap To Grid exclusively, they often forget that the other option for audio editing even exists!  And that tends to get them off on the wrong foot for certain projects.  A typical MIDI-based production in most commercial styles fits easily to a time grid, and snapping to mathematical divisions of the beat makes it easy to sound very tight and professional.  But what about the opposite situation: a session with no MIDI anywhere and a dozen mics capturing real performances in their own flow of time?  This is where you must turn the Snap To Grid features “off” entirely for your cleanup and editing, or you will get very frustrated indeed.  At that point it comes down to your skills as an audio editor to create a successful outcome – or not – for the track or project.

Aside from creating a good musical time flow, there is only one basic rule that must be followed in detailed audio editing, whether the work happens “automatically” or with the handy Zoom Tools on an as-needed basis: cut at zero crossings.  Even if your audio editing application can locate zero crossings for you automatically, be sure to zoom into the waveform and see for yourself where the cuts will be made.  (Remember why: by cutting at these zero crossings we eliminate any chance of audio clicks and pops when the waveforms we connect together do not match up as we edit.  These cuts are always at zero amplitude, so they will of course match up nicely!)  In some cases the software may jump to a zero crossing that makes the timing seem wrong, and this will force you to look for another nearby zero-crossing edit point.  These choices all have to be made by you – the artist, the composer, the producer.
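
(To show roughly what the software is doing when it hunts for a zero crossing, here is a short sketch in Python with NumPy – purely illustrative – that finds the zero crossing nearest a desired edit point in a mono waveform.  In a real session you would still zoom in and confirm the cut by eye and by ear.)

```python
import numpy as np

def nearest_zero_crossing(samples, target_index):
    """Return the sample index of the zero crossing nearest to target_index.

    samples: 1-D NumPy array holding a mono waveform.
    A zero crossing is any point where consecutive samples change sign
    (or where a sample is exactly zero).
    """
    signs = np.sign(samples)
    # Indices where the sign changes between sample i and sample i+1
    crossings = np.where(np.diff(signs) != 0)[0]
    if crossings.size == 0:
        return target_index  # silence or pure DC: nowhere better to cut
    return int(crossings[np.argmin(np.abs(crossings - target_index))])

# Example: a 440 Hz sine at 44.1 kHz; ask for an edit near sample 1000
# and get back the closest point where the waveform passes through zero.
sr = 44100
t = np.arange(sr) / sr
wave = np.sin(2 * np.pi * 440 * t)
print(nearest_zero_crossing(wave, 1000))
```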

It takes time to develop well-trained ears for making the best editing choices in production.  With realistic planning for the style and intended outcome, along with a clear understanding of “time” and its implications for the tools you’ll use during later production stages, the best methods can be utilized to create the magic in your music that you want your audience to hear!

With all of this in mind, I thought it would be important and fun to explore, in a short list, the various ways we might use audio recordings in a production.  By understanding at least on a basic level how each of these differs from the others, it should be easy to decide, when planning ahead for a certain style of production, which approaches will prove most useful in your work!

Audio tracks – sound recordings of any length can be edited freely for perfect performances and composite takes. 

Audio samples – sound recordings of any length can be triggered by MIDI note-on commands for precise timing, pitch manipulation, synthesis parameters, etc.  The direct control available is limited by the parameters of the sample player itself.

REX files – sound recordings previously “sliced” into useful single elements.  Each slice can be tuned, transposed, moved or swapped in time (the RXP player allows this), and otherwise modified with many synthesis parameters.  The slices are played by MIDI note-on messages, which allows for much creative freedom – all at the desired “project tempo” (see the small sketch after this list).

AudioSnap – sound recordings that can contain markers in time and make use of the project tempo in various ways, such as time control independent of pitch control.  Once the time markers are in place to define the tempo, these files can also become a new quantize timing grid for use with other AudioSnap material, and even with MIDI too.
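
(To make the slice idea concrete, here is an illustrative sketch in plain Python – not the RXP’s actual behavior – showing how slice start points from a loop recorded at one tempo can be re-scheduled at the project tempo, which is essentially how sliced audio stays tight without time-stretching.)

```python
# Illustrative sketch of REX-style slice scheduling (not the RXP's actual code):
# slice start points are expressed as beat positions in the original loop, so
# the same slices can be retriggered at any project tempo without stretching.

def schedule_slices(slice_starts_seconds, loop_bpm, project_bpm):
    """Map slice start times from the loop's original tempo to the project tempo."""
    original_beat = 60.0 / loop_bpm      # seconds per beat in the recording
    project_beat = 60.0 / project_bpm    # seconds per beat in the project
    trigger_times = []
    for start in slice_starts_seconds:
        beat_position = start / original_beat          # where the slice sits, in beats
        trigger_times.append(beat_position * project_beat)
    return trigger_times

# Example: a drum loop recorded at 96 BPM, sliced on each eighth note,
# dropped into a 120 BPM project: every slice fires earlier, keeping the groove.
starts = [i * (60.0 / 96) / 2 for i in range(16)]      # eighth notes at 96 BPM
print(schedule_slices(starts, loop_bpm=96, project_bpm=120)[:4])
```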

It goes without saying that all of these types of audio used in productions have the same options for processing and transforming the sound with plug-ins for EQ, dynamics control, and effects.  They are all just audio signals passing through your console, ready for you to modify as you wish.  (Sometimes my students need a quick reminder of this fact too!)

In case it might be helpful in wrapping up this topic, here’s a quick video tutorial about audio editing in SONAR from the Berklee online course Producing Music with SONAR.  It takes some time, but with the right tools and a little practice you’ll be making the perfect audio edits your music needs!  I hope this has provided some insights that will help you make clear decisions about the right directions for your production work, based on the style and desired outcome.  These kinds of discussions are relevant for any of the music production applications on the market, such as Pro Tools, Logic, etc.  Remember that great results don’t come only from being “Certified” on any given software application; that effort is fine, and it may qualify you as an engineer, but not necessarily as a good producer!  Your listening focus and artistic vision can be expanded to include much more than which button to push, which functions to choose, or memorizing all of the shortcut commands; you can easily learn any of those from the owner’s manual.  What I always try to share when helping young or budding artists is that there are specific tried-and-true steps in production that make for a successful outcome of the project.  My goals as a teacher are best met when I can focus students on these ideals of better listening and more carefully thought-out production choices every day.

Thanks for stopping by for my thoughts on this topic!  Maybe I’ll see you in a class sometime in the future, where I can look forward to hearing your work as we listen our way toward the best possible production decisions for the style you are interested in.

All my best and happy music making!

Steve MacLean
Assistant Professor
Electronic Production and Design Department
Berklee College of Music

Steve MacLean is the course author for Berkleemusic’s online course Producing Music with SONAR
