Bass Makes the Walls Shake 


Digital Audio versus MIDI

Published on November 6, 2014 by in Tutorial

Who, What, Where, When and Why? Oh yeah…and how?!

What is the difference between MIDI and digital audio? This is the first important thing there is to know before you begin producing music on your computer. What is audio? What is MIDI? What tools are used and what do producers use them for? How and where do people implement these tools? When would you use either of these things in the production process? Why would you use one over the other and more importantly, why would you need to know the difference? I’ll do my best to answer all these questions and more in this article, so if you want to make music on your computer, and you are just getting started…this was written specifically for YOU!

I feel extremely blessed in my current occupation because my job is to help people make music. Being a salesperson in the music retail industry may suggest that my primary role is to sell products; however, I’ve found that my day-to-day operations truly involve educating people. I guide them on their path from passion, desire, and a need to create to an understanding of the technology that will help them follow their dreams. It’s fun and rewarding, but it isn’t easy. I deal with everything from live sound and lighting to keyboards and recording software. I help parents make decisions about their children’s music education and, 5 minutes later, help business owners and famous musicians grow their brand. Regardless of whether someone is a novice, hobbyist, or professional, at the end of the day everyone wants to take a crack at making or recording their own music. In my opinion, there is one concept that must be understood before working with a Digital Audio Workstation (DAW): the difference between digital audio and MIDI. I learned this several years ago while obtaining my degree in digital audio technology, so I’m admittedly a bit removed from how foreign the concept can feel when you first hear about it. Therefore, this article may have room for improvement, so if you have any questions after all is said and done…Google them! I’m writing this for fun, not to become an online Q&A guy! Anywho, that’s enough of that. I think it’s time we get down to business.

Why would you need to know what MIDI and audio are? Simple, these are the two primary things that you will deal with when creating music yet they are completely different. If you know what audio and MIDI can do, and how they differ, you have made it past “Level 1”. So let’s bring you up to speed.


I’m so tempted to begin with MIDI because it’s a trickier subject, but I’ll start with audio because most people already have a basic understanding of what it is. Sound waves exist as variations of pressure in a medium such as air. They are created by the vibration of an object, which causes the air surrounding it to vibrate. The vibrating air then causes the human eardrum to vibrate, which the brain interprets as sound. When something makes a sound, whether it be a voice, a drummer, a bird singing or even the noise a mechanic’s air gun makes when changing a tire, that sound can be captured or recorded. This is typically done by plugging a microphone into an audio interface. An audio interface is a device that sits between a microphone and a computer, and if you are going to produce music, this is the second piece of equipment you should buy. (An Apple computer is the first thing you need!) An audio interface converts an analog signal (whatever sound is coming into the microphone) into a digital audio recording, a digital representation of that signal. This is called analog-to-digital (A/D) conversion, and the end result…well, you guessed it…DIGITAL AUDIO!!!

Once a sound source has been recorded and converted to digital audio, it is stored on your computer’s hard drive, where it can be played back through speakers or reference monitors. (Studio monitors are the third item you need!) The sound that is reproduced will play back exactly the way it was recorded. If you record a voice memo on your phone, when you play it back…that’s your voice, right? Great; if you understand that, you understand digital audio recording.

Here is a bastardized version of how someone explained digital recording on Wikipedia:

A digital recording is produced by converting the physical properties of the original sound into a sequence of numbers, which can then be stored and read back for reproduction. Normally, the sound is transduced (as by a microphone) to an analog signal in the same way as for analog recording, and then the analog signal is digitized, or converted to a digital signal, through an analog-to-digital converter and then recorded onto a digital storage medium such as a compact disc or hard disk.

Two prominent limitations in functionality are the bandwidth and the signal-to-noise ratio (S/N). The bandwidth of the digital system is determined, according to the Nyquist frequency, by the sample rate used. The S/N of a digital system is first limited by the bit depth of the digitization process, but the electronic implementation of the digital audio circuit introduces additional noise.


That was really well written in my opinion, but I digress. So…
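If you like seeing those ideas in code, here is a minimal Python sketch of the sampling and quantization steps that excerpt describes. The function name and the 1 kHz test tone are just for illustration; this shows the idea behind A/D conversion, not a real converter:

```python
import math

def digitize(signal_fn, duration_s, sample_rate, bit_depth):
    """Sample a continuous signal at discrete points in time, then
    quantize each sample to a finite number of integer levels,
    mimicking what an A/D converter does."""
    n_samples = int(duration_s * sample_rate)
    max_code = 2 ** (bit_depth - 1) - 1  # 32767 for 16-bit audio
    samples = []
    for n in range(n_samples):
        t = n / sample_rate                  # sampling (sets bandwidth, per Nyquist)
        x = signal_fn(t)                     # "analog" value in [-1.0, 1.0]
        samples.append(round(x * max_code))  # quantization (sets the noise floor)
    return samples

# a 1 kHz sine "recorded" for 10 ms at CD quality (44.1 kHz, 16-bit)
sine = lambda t: math.sin(2 * math.pi * 1000 * t)
pcm = digitize(sine, 0.01, 44100, 16)  # 441 integer samples
```

Note how the two numbers you pass in map directly onto the two limitations the excerpt mentions: the sample rate caps the bandwidth (per Nyquist), and the bit depth sets the signal-to-noise ratio.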

Who would want to record audio? Singers, guitar players, and pretty much any musician who wants to capture a live performance, but those are only a few examples. When I refer to a live performance, that doesn’t necessarily mean at the venue or on stage. It simply means that it is played by a real person and the performance is fairly raw and unaltered. Once something has been recorded, you can’t change the way the audio source was recorded. You can, however, change the way the recording sounds by adding effects or chopping the file up and rearranging it in your DAW. Also, it’s good to know the old studio engineer’s adage, “garbage in, garbage out,” meaning you can only get out what you put in. When recording audio, the better the signal you record, and the higher the quality of the samples, preamps, and microphones you use, the better your overall results will be…because if you put garbage in, you’re gonna get garbage out. This is audio, not alchemy.

When would you use audio? I’d say there are two instances when audio is the way to go. The first is when you need to record a performer/artist. The second is when you want to use prerecorded audio in your project. Wait…why would you use prerecorded audio? If you have to ask, please refer to this video on the Amen break. If you don’t have time to watch it, I can simplify by saying that many top-40, pop, and underground music producers take snippets of prerecorded audio and loop them to make tracks. That may sound unoriginal or even look like cheating, but once again, I encourage you to watch the “Amen Break” video to see how many ways people can freak a 6-second drum loop before making any judgments!

While you can’t change what was recorded in the original audio file, you can cut it up, chop it, and add effects to it, which alters the recording and makes it unique. You can take a single kick drum sample from a loop and use it in your project, or add delay and reverb effects to vocals. The possibilities are only limited by one’s imagination.

What it looks like:
When audio files are recorded or imported into a DAW, they are represented by a waveform image (a WAV file), like the one below.

The horizontal line that runs through the middle of the waveform is the zero line (the points where the waveform crosses it are called zero crossings) and represents your speaker when it is not moving. The squiggly lines above and below represent the speaker pushing out or pulling in. This back-and-forth motion creates vibration that travels through the air, into your eardrums, and creates the sound you perceive.


Pros:
  • Captures a live performance
  • Reproduces sound accurately

Cons:
  • Large file size
  • Original performance can only be changed by re-recording


Here is where things get confusing. Sometimes, I need to hear about things, read about them, and play around with them in real life before a concept sinks in. I feel like MIDI is one of those things for most people. MIDI is an acronym for Musical Instrument Digital Interface; however, it’s more important to understand what it does and how to use it than what it stands for.

MIDI transmits binary code: 1’s and 0’s, on or off. A MIDI controller might look like a digital piano, but the difference is that a MIDI controller doesn’t have any sounds inside of it; it is not the tone generator or source of the audio signal. MIDI allows us to send information from one device to another device that produces an audible sound. One example would be sending MIDI from a drum machine like the MPC2000 to a sound module, such as an Access Virus TI2. A more common example would be plugging a MIDI controller (check out the Akai MPK249) into your computer’s USB port and opening a software instrument or “soft synth” in your DAW. If you press a key, push a button, or move a fader on a MIDI controller, you are sending a message to another device that includes information such as the key (note) being pressed, how hard it was pressed (velocity; harder is usually louder, softer is typically quieter), and how long the note was held (duration). To make sure you understand, I’d like you to think about a piano roll from the early 1900’s. It only contained information on the notes being played, but any player piano could play back those songs. Since all pianos sound different, the song will sound different, but the notes, or rather the performance, would remain the same across any piano being used. Relate that to MIDI and we can see how MIDI files only contain performance data. This gives us an incredible amount of versatility when creating music, which we will discuss next.
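For the curious, those note/velocity messages really are tiny. A Note On is just three bytes, which you can sketch in Python (this builds raw MIDI bytes by hand purely for illustration; in practice your DAW or a MIDI library handles this for you):

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message: status byte, note number, velocity.
    Channels are 0-15; notes and velocities are 7-bit values (0-127)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """Matching Note Off (status 0x80); a release velocity of 64 is a common default."""
    return bytes([0x80 | channel, note, 64])

# pressing middle C (note number 60) fairly hard on channel 1
msg = note_on(0, 60, 100)  # three bytes: 0x90, 60, 100
```

Notice there is no sound anywhere in those bytes, only the performance data (which key, how hard). The receiving synth or sound module supplies the tone, which is exactly the player-piano idea above.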

Who would want to use MIDI? Mozart, Beethoven, Bach and any other songwriter and composer. If they had this technology a few hundred years ago, we would have a much larger collection of music from some of the greatest musicians to have ever walked this earth. But, we have the technology now, and if you’ve read this far, you’ll want to use it too!

Consider this: if you are only recording the performance data, that means the performance can be replayed using any sample or patch. Oh snap! This article is supposed to be for beginners, yet I’ve dug myself into a little hole here, because this can be a difficult concept to grasp. Regardless, I’ll do my best!

Let’s start with a screenshot that shows how MIDI looks when it has been recorded into a DAW:

Notice how there isn’t any representation of an audio file; instead we see notes on what is referred to as a “piano roll”? Great, now you are beginning to understand a key factor in what makes MIDI and audio so different. In this format, displayed as MIDI notes, I can do what is referred to as non-destructive recording. I can add or delete notes using a pencil and/or eraser tool. I can fix timing issues (quantization), and the best aspect of all is that I can change the sound(s) being used to play back the performance. If I don’t like the way my kick drum sounds, I can swap it out. If I choose to copy the performance data and paste it to a new MIDI or software instrument track, I could use a piano or harpsichord soft synth to play the same sequence, and this would be incredibly easy to do in a matter of seconds! This is useful when selecting instruments for any work in progress. Let’s say you like the melody of a bassline you recorded via MIDI, but the actual bass sound isn’t quite right…thank God for MIDI, because as you now know, you can use the same performance you just played to trigger sounds from another bass patch or sample. One last thing worth mentioning is that you can convert MIDI to audio (audio-to-MIDI conversion exists too, but it’s far less reliable), then apply effects, chop it up, and rearrange it once converted, giving you the same options you have with any other audio or WAV file. That’s pretty F’in mind-blowing!
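Quantization, mentioned above, is a good example of how cheap it is to edit performance data. Here is a toy version in Python (the tick values and the note-tuple layout are made up for illustration; DAWs store MIDI in their own internal formats):

```python
def quantize(notes, grid_ticks):
    """Snap each note's start time to the nearest grid line.
    notes is a list of (start_tick, pitch, velocity) tuples.
    Non-destructive: returns a new list; the original performance is untouched."""
    return [(round(start / grid_ticks) * grid_ticks, pitch, vel)
            for start, pitch, vel in notes]

# at 480 ticks per quarter note, a 16th-note grid is 120 ticks
performance = [(0, 36, 110), (115, 38, 95), (250, 42, 80)]  # slightly off-grid
tight = quantize(performance, 120)
# tight == [(0, 36, 110), (120, 38, 95), (240, 42, 80)]
```

Swapping the kick drum for a harpsichord is the same trick: the tuples stay exactly where they are, and only the instrument reading them changes.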


Pros:
  • Small file size
  • Non-destructive recording

Cons:
  • The same soft synths must be loaded to truly recreate the desired performance
  • Hard to be universal

The takeaway

Hopefully that cleared things up for you, but if not, you should watch some videos on YouTube until it starts making sense. Better yet, download a program like Logic Pro so you can mess around with audio and MIDI for yourself. That’s really the best way to learn. So regardless of who you are or what your goal is, understanding the difference between audio and MIDI is a good first step toward getting started with, and understanding, music production.

That’s it for now, cheers!

Don’t forget to follow me on SoundCloud, and like my various social media avenues!


Why leaving headroom is essential before sending a track to a mastering facility/engineer

Published on October 15, 2014 by in Tutorial

Why you want to leave headroom for a mastering engineer:

I’ve recently received several unmastered tracks from producers to be released on Savage Land and noticed something worth discussing. I figured I’d put up a post, so when I see it happen again, I can direct people here for more detailed information on what I need in order to send a tune to the mastering engineers, and why I need it done a certain way.

First, let’s give some definitions for a few terms that will come up in this article:

  • dBFS = decibels relative to full scale in digital audio
  • 0 dBFS = maximum possible level in digital audio before clipping occurs
  • Bit depth = determines the dynamic range of digital audio
  • Clipping = exceeding the available headroom and flat-topping the waveform
  • DAW = Digital Audio Workstation
  • Headroom = amount of dB left before clipping

Now that we understand these terms, let’s talk about how to prepare a track for a mastering facility.

Leave some damn headroom!

The main thing to understand is that before you “bounce” the track, you must leave some headroom. When the highest peak is anywhere between -12 and -3 dBFS, the engineer will have enough headroom to properly master the tune. I prefer -6 dBFS (see images), but if you are very comfortable with your mix, -4 dBFS is fine. The project may sound quiet, but if you need to hear it louder, you should turn up your speakers rather than the faders in your DAW.

This shows 6 dB of headroom.  Perfect!


This shows NO headroom at all. The track is maxed out; what are we supposed to do with that?
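If you want to sanity-check a bounce yourself, the peak math is simple. Here is a Python sketch; it assumes float samples with full scale at 1.0 (the usual convention for float audio), and `peak_dbfs`/`has_headroom` are hypothetical helper names, not a real DAW API:

```python
import math

def peak_dbfs(samples):
    """Peak level of a float signal (full scale = 1.0) in dBFS."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

def has_headroom(samples, ceiling_dbfs=-6.0):
    """True if the loudest peak sits at or below the target ceiling."""
    return peak_dbfs(samples) <= ceiling_dbfs

mix = [0.45, -0.5, 0.3]          # loudest peak is half of full scale
print(round(peak_dbfs(mix), 1))  # half of full scale is about -6.0 dBFS
print(has_headroom(mix))         # True: this bounce leaves 6 dB of headroom
```

Halving the amplitude costs about 6 dB, which is why a mix peaking at 0.5 of full scale is exactly the -6 dBFS target described above.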


You want your track to sound like other songs on the radio, right? You want that great tonal balance, without any harsh, muddy, or nasal sounds, and most of all, you want it to sound LOUD and full, right? Aside from noise reduction, harmonic excitement, stereo imaging, and mastering reverb (rarely used), there are 3 main steps in the mastering process that are going to boost the track’s overall volume. But if you don’t leave any headroom, there will be nothing the engineer can do, because there is no room left before clipping. The 3 steps that will really affect the volume the most are equalization, multiband compression, and loudness maximization.

With respect to equalization…

the goal is to create a good tonal balance for the overall track, making sure all frequencies sit well with respect to one another. The issue is that if you boost any frequency range, it is going to make the track louder. If you have no headroom, boosting the EQ is going to cause clipping!

Multiband compression…

allows the engineer to compress specific frequency ranges. Perhaps they need to tighten up the bottom end of a mix (0 – 120 Hz), or maybe they want to tighten up the mix in general and add warmth to the instruments and vocals (120 Hz – 2 kHz), or they may want to increase the clarity of the instruments (2 – 10 kHz), etc. If you need to add some makeup gain to any particular frequency range and have no headroom, once again, you’d be clipping the output meter. The engineer has already added a few dB from the EQing stage; thankfully, you’ve given them a full 6 dB of headroom to work with!

Loudness maximization…

is the final step in the mastering process. It’s similar to limiting, but slightly different (how? That’s a discussion for a different post). The only thing you need to know is that this is the tool that brings the overall volume of the track to the required output level. Let’s say I want to leave a fraction of a dB of headroom in my final master (-0.1 dBFS is common); I can set my threshold so the signal never goes past that point. This takes all the remaining headroom out of the track, leaving you with a final master that is radio ready.
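The gain math behind that last stage looks like this in Python. This is a deliberately crude sketch: it only scales the track so its peak lands at the ceiling, whereas a real maximizer uses look-ahead limiting to push the average level up as well:

```python
def maximize(samples, ceiling_dbfs=-0.1):
    """Scale the whole track so its loudest peak sits exactly at the ceiling."""
    ceiling = 10 ** (ceiling_dbfs / 20)   # convert dBFS to linear amplitude
    peak = max(abs(s) for s in samples)
    gain = ceiling / peak                 # this is where your headroom gets used
    return [s * gain for s in samples]

quiet_master = [0.4, -0.5, 0.25]  # peaks at -6 dBFS, as requested above
loud = maximize(quiet_master)     # peaks at -0.1 dBFS, about 0.989 of full scale
```

If the mix arrives already peaking at 0 dBFS, the computed gain drops below 1 and there is nothing left to push, which is the whole point of this post.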


So when you are bouncing your project down and getting ready to send it off to someone else to master, here are a few things to keep in mind.

  • Leave 6 dB of headroom (Your master output fader should never go above -6 dBFS)
  • Bounce in 24 bit
  • Sample rate should be 44.1 kHz or higher
  • Export the track from one bar before it starts (pre-roll) and leave space at the end until reverbs and delays have completely tapered off.
  • Bounce to either WAV or AIFF for the file format
  • Choose “interleaved stereo”
  • Don’t use normalization
  • Don’t use dithering or noise shaping
  • Check how your mix sounds in mono (most big club sound systems are wired that way)
  • If you have plug-ins on your master output, bounce down one version without them and another with them. If you have a limiter on the output channel, you will still want to disable it.
  • Don’t put fades at the beginning or end of the track; instead, tell the engineer where you would like them to be.
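The mono check in that list is worth understanding too. Folding a stereo mix to mono is just averaging the channels, as in this minimal Python sketch (the sample lists are made up for illustration):

```python
def mono_sum(left, right):
    """Fold a stereo mix to mono by averaging the two channels.
    Out-of-phase material cancels and drops in level, or disappears entirely."""
    return [(l + r) / 2 for l, r in zip(left, right)]

# a part that is perfectly out of phase between channels vanishes in mono,
# which is exactly what you'd hear on a mono-wired club system
left  = [0.5, -0.5, 0.3]
right = [-0.5, 0.5, -0.3]
print(mono_sum(left, right))  # [0.0, 0.0, 0.0]
```

If an element of your mix gets noticeably quieter after this fold, you have a phase problem worth fixing before you bounce.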

Have fun bouncing your tracks and make it easy on your engineer by coming to the table with something they can work with!



I was recently given a pair of Munitio Nines to check out, and I thought it’d be nice to post some opinions about the headphones and compare them to Shure’s SE215CLs. When I saw the box, I realized the marketing tactic that was being used and why they were called “Nines.” The earpiece looks like a bullet shell, and they call the interchangeable sleeves hollowpoints! Genius.

Now let’s get away from the appearance and talk about functionality. The Nines come with a special carrying case, and everything inside the box looks purely indestructible. There is also a set of plastic pieces inside the case that snap onto the bullet shell to fit snugly in one’s ear. The speaker drivers are made with rare-earth neodymium magnets, which give you quick recovery and better dynamic range. Dynamic range is basically the difference between the quietest sound and the loudest sound, so it really helps the music become more dramatic. Additionally, drivers like this actually have a break-in period and start to sound even better after you use them for a while. They state this on the package; I didn’t know if I believed it, so I verified it with some audiophiles who are much smarter than me. I also found it to be true after breaking mine in; they just got louder and cleaner over time.

Practical use:

I use these on a daily basis with my iPhone 4s. Before that, I was using my Shure SE215s. The Shures are designed to be used with in-ear monitoring systems on stage, but I have really enjoyed them for listening to music and taking calls on my phone, as well as DJing. The Shures are priced at $100, while the Nines are going to set you back $169. The Munitio Nines deliver a much higher-quality audio signal. The bass on these things is freaking incredible for their size. The Shures have pretty much no bass, but I’m told their next step up (the SE315) has a dual-driver design, or something to that effect, which gives more bass; I haven’t tried those. For what I’m using them for, the Munitios are better. I DJ’d with them the other day, and they worked great, so you can use them for more than just listening to music and making phone calls. Also, the Shures won’t stay plugged into my phone for some reason. I don’t know if the size is slightly different or what, but the sound cuts in and out, and when I’m going for a long drive with the Shures, I have to use Scotch tape to hold the plug firmly in the socket. The Munitios don’t have that issue.


The Nines only come with one set of plastic earpieces. These are the doohickeys that hold the bullets in your ears. I’ve been moving, and I’m naturally a little chaotic, so I’ve already lost these things! Maybe they’ll turn up later! One thing I like about the Shures is the way they wrap around the ear, but then again, they are supposed to be used on stage and designed to be invisible, and I think Munitio’s design will work for most people. But yeah…other than the fact that I’ve lost the piece that holds them in my ear better, these things are awesome. I’ll be using the Shures when I get a wireless monitoring system for Beatkillerz performances, but in my day-to-day operations, the Nines are my favorite. Can’t wait to check out some of the other products in their line!

Get your pair of Nines here:

“Bullet” points: (get it? Haha)

  • Frequency Response: 12Hz – 20,000Hz
  • Their trademarked “Silicone Hollow Points” have noise-isolating technology that allows you to listen to music at lower volume levels while making it sound better. They are also very comfortable, and several sizes are included.
  • The cable itself is made of Kevlar, the same material used for bulletproof vests! It’s nice because it doesn’t get tangled up, and it just feels durable.
  • The 3.5mm stereo plug is 24K gold plated, which gives a better audio signal than other materials, and it plugs in to just about anything.

In a Few Hours of Madness

Published on August 26, 2012 by in updates

The winds of change have ushered in a new era. Connections are more important than ever, and this is the communication gateway to and from the mind of Havok Mega. Subscribe to the RSS feed and be on the lookout for podcasts and downloads, right here and on iTunes!

