© 2014 Shaun Gallagher All Rights Reserved
This is a recent sound replacement I did for the World of Tanks - Endless War trailer.
All sound design was custom-created by me.
Music has been pulled from the original trailer and lowered to highlight the sound design.
The full HD version can be downloaded here.
These are some recent product videos I did for my previous job. I was responsible for all formatting, editing, writing, VO direction, and sound design.
More of the videos can be seen at Wowzers’ website.
This commercial spot was a project done in college.
The music, voice over recording, sound effects recording, editing, mixing, etc., even down to the 2-beep, were all done by me.
If a student performs poorly in Wowzers several times in a row, they’re sent to a remediation video, written personally by one of our writers, to relearn the previous lesson.
The video re-teaches them in real time, with voiceover matching each on-screen action.
The audio was originally recorded poorly, before I came on board. Rather than re-record all 500+ videos, we opted for a still lengthy, but easier, method of cleaning up the audio in each video.
This is a demo of some of the results I was able to obtain.
With each tier, the amount of time needed for cleanup lengthens. However, we decided to go with the last, “best” option, and hopefully you’ll see why.
Methods used for cleanup in each tier:
It’s recommended that you listen on high-fidelity headphones or speakers to hear all of the differences.
Here’s some voice over progress in the Wowzers middle school product as of January 2014.
I wanted to highlight some of the more colorful and unique characters in the video for this batch of recordings.
More voiceover progress and highlights to come shortly!
Here’s a demo reel showcasing some sound design done for Wowzers.
In this release, voice over has been turned off.
It’s been quite some time since I’ve posted to this site.
A lot has been going on, and I’ve been quite busy!
Lessons will still continue, but the focus for the near future will be more of a showcase of my recent work.
To that end, this is a song that I recorded, mixed, and mastered recently.
All instruments and composition are my own.
In our previous lesson, we discussed frequency and what exactly we can hear.
We know that frequency is our rate of repetition, or cycles per second.
We also know that we measure frequency in hertz, or Hz.
In addition to this, on our graph:
we know that our “x” domain is time. Last lesson, I asked what our “y” domain is if our “x” is time.
This domain is called amplitude.
Amplitude is a measurement of “how much power” is behind our waveform, or the magnitude of an oscillation.
In audio, the more power we put behind a frequency (or several frequencies), the louder it is. The harder our sound source pushes the air molecules, the more force they feel, and the larger the amplitude of our molecular wave becomes. Simplified: the harder our molecules are pushed, the louder our sound will be.
So when speaking about audio, we can truncate amplitude’s definition to say:
Amplitude is the recorded measurement of loudness.
Loudness. But how do we measure something that is quiet, loud, or in between?
Well, on our graph, it’s how high up or down our waveform is plotted.
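To make this concrete, here’s a minimal Python sketch (the 100 Hz tone, sample rate, and amplitude values are just illustrative picks, not anything from a real recording). The same tone is generated at two amplitudes; the frequency is identical, and only how far the waveform swings from zero changes.

```python
import math

def tone(freq_hz, amplitude, n_samples=1000, sample_rate=48000):
    """Samples of a sine tone: y(t) = amplitude * sin(2*pi*f*t)."""
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n_samples)]

# Same 100 Hz tone, two different amplitudes.
quiet = tone(100.0, amplitude=0.1)
loud = tone(100.0, amplitude=0.8)

# The louder tone plots higher up and lower down on the graph.
print(max(quiet), max(loud))
```

The louder version reaches roughly 0.8 while the quiet one tops out near 0.1, which is exactly the “how high up or down” difference described above.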
However, if I say, “I went to a concert last night, and it was really loud,” would you be able to comprehend, and therefore imagine what I was hearing or how loud it was?
Well, sure, to some extent, but you base it on your own past references. So we now need to quantify our loudness. If we don’t, then the word is entirely subjective.
How do we do this? With the decibel, which is our next topic of discussion.
In the last lesson, we covered what audio is, and how we hear it.
In this lesson, let’s discuss what we hear!
When we hear sound, we are hearing the pressure differences within the air itself, as previously discussed.
These pressure differences occur at a certain rate, and are measured in the time domain.
Because they are cyclical, and do not just occur once, they are measured in a value that we call hertz.
Hertz is defined as cycles/second, and is commonly abbreviated as Hz.
Hertz describes how often one cycle repeats, or how many times a single event occurs per second. Though commonly applied to sound, hertz can describe anything with a consistently repeating event, and is therefore a pretty versatile unit.
So when we discuss frequency, we are also discussing hertz, as the two are often synonymous. Frequency can also be abbreviated as “freq.”
Let’s find out how we calculate frequency:
Say we have a clock on which we have replaced the 12 with a 0, and the clock has only one arm. We also have a stopwatch and a video camera to record our data. Starting with the arm motionless at the 0 position, we give it a spin, so the arm sweeps around and around. How many times does the arm sweep all the way around during a one-second span? Each time the arm travels a full 360 degrees and reaches the 0 position again, that is one cycle. We count the number of cycles the arm completes during a one-second span, and voila! We have our frequency!
So let’s say we spun our clock arm at a rate of 60 cycles in our one second span.
What is our frequency?
Well because we know frequency and hertz are synonymous, our frequency is 60.
60 what? 60 cycles per second, or hertz. Our arm spun around at 60Hz because we counted the number of cycles (number of times our arm passed 0) during a one second time span. Make sense? Awesome!
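The clock-arm count above can be sketched in a few lines of Python. The timestamps here are made-up sample data standing in for the moments the arm passes 0:

```python
def frequency_hz(cycle_timestamps, window_seconds=1.0):
    """Count the cycles completed within the window, divide by its length."""
    cycles_in_window = sum(1 for t in cycle_timestamps if t <= window_seconds)
    return cycles_in_window / window_seconds

# 60 evenly spaced cycle completions during one second -> 60 Hz.
timestamps = [(i + 1) / 60 for i in range(60)]
print(frequency_hz(timestamps))  # 60.0
```

Same tally, same answer: 60 cycles counted in one second gives 60 Hz.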
So now that we understand frequency, know that the human ear is tuned to hear only a certain range of frequencies. At frequencies outside this range, our ears cannot vibrate at the same speed as the air molecules pushing on them, so we cannot hear the sound, nor can our brains make sense of it.
For humans, this range of audible frequencies extends from roughly 20Hz to 20kHz (20,000Hz). Some people can hear slightly higher, but rarely lower. As we age or listen to sounds continuously at loud volumes, our range of audible frequencies decreases starting in the higher end. What we lose in this range can never be replaced. However, we will discuss hearing loss in a later lesson.
In the audio community, we commonly refer to this range as the audible spectrum of human hearing. Sometimes, the range is said aloud as “twenty to twenty k,” or “twenty to twenty kilo hertz.”
Wow! That’s pretty incredible! Our ears respond to a span of roughly 19,980 Hz, all of which is just differences in air pressure.
So now ask yourself this: why is it that when I blow a dog whistle, I can’t hear it but my dog can?
Well, that’s because dogs have a different range of frequencies that they can hear.
Dog whistles work on a very simple principle. The frequency to which a dog whistle is tuned is above our range of audible frequencies (above 20kHz). However, dogs can hear frequencies up to about twice as high as we can. So if we tune a dog whistle above our audible range, but within theirs, they hear it while we remain unaffected. Pretty cool trick.
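The dog-whistle idea can be sketched as a simple range check. The human range comes from this lesson; the canine figures (roughly 67 Hz up to about 45 kHz) are commonly cited approximations, not measurements from the post, and the whistle pitch is a hypothetical example:

```python
HUMAN_RANGE = (20.0, 20_000.0)   # from the lesson: 20 Hz to 20 kHz
DOG_RANGE = (67.0, 45_000.0)     # approximate; upper limit ~2x ours

def audible(freq_hz, hearing_range):
    """True if the frequency falls inside the given hearing range."""
    low, high = hearing_range
    return low <= freq_hz <= high

whistle = 30_000.0  # a hypothetical dog-whistle pitch above 20 kHz
print(audible(whistle, HUMAN_RANGE), audible(whistle, DOG_RANGE))  # False True
```

The whistle sits above our ceiling but below the dog’s, so only the dog hears it.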
Let’s go back to frequency before wrapping up.
What would happen if we placed a red dot on the end of our clock arm from earlier, and moved the arm across a sheet of paper in a consistent motion (right to left) and at a consistent speed for one second? Let’s also say that our red dot leaves behind a line. What would we see on this paper? Well, we would be plotting a graph.
This graph has clear positive and negative values on either side of 0, and it was created from a single frequency source. This is what we call a sine wave. More specifically, because it is a single frequency stemming from a single source, it is also called a pure tone sine wave, as no other sources add to or subtract from our waveform.
However, what if we did have more sources, and perhaps different frequencies occurring during the same time and plotted on the same graph?
If we took a snapshot in time, our graph may look a little more like what’s plotted on the bottom of this image:
As two frequencies are combined, values within the waveforms are added to and/or subtracted from each other; summing the two waveforms above produces the bottom image. This gives us a complex waveform, or a waveform containing more than one frequency.
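The summing described above is just sample-by-sample addition. Here’s a small Python sketch (the 220 Hz and 660 Hz tones and the 48 kHz sample rate are illustrative choices, not from the post):

```python
import math

def sine(freq_hz, n_samples, sample_rate=48000):
    """Samples of a pure-tone sine wave at the given frequency."""
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n_samples)]

n = 48000  # one second of samples
low, high = sine(220.0, n), sine(660.0, n)

# Adding the two pure tones point-by-point yields a complex waveform
# that contains both frequencies at once.
complex_wave = [a + b for a, b in zip(low, high)]
```

Each sample of `complex_wave` is the sum of the two pure tones at that instant, which is exactly how the bottom plot is built from the two above it.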
On a graph like this, our right to left motion of the paper becomes our time axis (along x). But what about our positive and negative values, our up and down axis, our “y?” This is amplitude, and the topic of our next lesson.
So the term “audio” tends to get thrown around a whole lot these days, but what does it actually mean?
Well, audio, by definition, is analogous to sound.
So then, what is sound?
Sound is what we hear. Sound is all around us. In fact, if you were ever unable to hear any sound, you might go momentarily insane. This isn’t always the case, but I’ll explain why later in this post.
To explain the definition of sound would be to also explain how it works.
Sound is simply a pressure difference caused by vibrating molecules.
In our everyday lives, the medium through which sound primarily travels is air; however, it is not the only medium to which sound is confined.
Air you say? How do I hear the air? Wouldn’t everything just sound like wind if I were only hearing air?
Well, no. Let’s think about it like this:
-When a sound is made (i.e., something vibrates), due to Newton’s laws and the conservation of energy, the surrounding air molecules also have to vibrate, at least for some period of time.
-This happens because the air we breathe is made up of numerous different gases, and therefore numerous different molecules within those gases.
-When the energy from, let’s say, a tuning fork is transferred to its surroundings (and in part lost as heat energy), it affects and moves its nearest surroundings. These would be our air molecules.
-The molecules are moved or dispersed in waves. These waves are air pressure changes which occur at various different frequencies due to the attributes of the sound source, and in some specific cases, one frequency.
-When the bumping of all of these air molecules eventually reaches your ear, your brain then decodes these changes in pressure, and creates what we perceive as sound.
I will not explain how the ear or the brain does this, unless you’d like me to.
Here is a graphic that will hopefully help:
So how fast does all of this happen?
Well, sound is subject to the density of the molecules through which it travels. The closer together (or more numerous) the molecules, the faster they bump into each other, and the faster sound travels.
On a normal atmospheric day, sound travels through air at roughly 344 meters/sec, or 1130 feet/sec.
Wow! That sounds fast! Well, sure it is. It’s so fast that we hear things like our hand claps and rock concerts without any problem.
But what if we extend the distance from our sound source?
Ever been to the Grand Canyon or stepped into a big room and yelled “Hello?”
You hear it back a moment later, don’t you?
Well this is actually the original sound traveling through the air, and then bouncing back and hitting your ears once again. Pretty cool, huh?
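That echo delay is easy to estimate with the speed of sound from above (~344 m/s). The 172-meter distance is just a convenient made-up example:

```python
SPEED_OF_SOUND_M_S = 344.0  # speed of sound in air, from the lesson

def echo_delay_seconds(distance_m):
    """Sound travels to the reflecting surface and back: twice the distance."""
    return 2 * distance_m / SPEED_OF_SOUND_M_S

# A canyon wall 172 m away: 344 m round trip at 344 m/s -> 1 second.
print(echo_delay_seconds(172.0))  # 1.0
```

The farther the wall, the longer the gap between your “Hello?” and its return.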
The point is that sound takes time to travel, just as most things do.
However, in comparison, the speed of sound is nothing compared to the speed of other things out there. In fact, we have jets that can travel at 3, 4, and 5 times the speed of sound. And think about light! The speed of light is so fast, that we know of nothing that can exceed it.
So wrapping this lesson up, I told you that if you ever didn’t hear sound, you may go insane. Why, you might have asked?
There are various places around the world in which sound is all but eliminated for specific purposes, primarily testing. These places are called anechoic chambers, and are specifically designed to reduce sound to the threshold of human hearing, or the quietest sound that our brains can decode and our ears can hear.
When placed in one of these environments, the reduction of sound is so dramatically alarming over time, that many people unravel under the weirdness of a world without sound.
However, if you can sit there long enough, this is what you’d hear:
Firstly, you’d have to get over hearing the sound of your own breathing at a loudness you’ve never experienced.
Next, you’d relax and slowly start to hear the blood being pumped through your ears and head. It would be deafening, and sound like a river rushing by.
And finally, if you could bear it this long and ignore the sounds I’ve stated previously, you would hear the sound of air molecules tapping on your eardrum very, very quietly. It would sound like a faint hiss, which is actually your eardrum moving 1/10 the width of a hydrogen atom! Amazing.
That is the quietest sound we can hear, and the threshold of human hearing.
In the next lesson, I will cover what exactly we can hear, now that we know how we can hear it.