Michael: [00:00:00] I've got a great question here from Carl. He asks, I frequently use what many would consider too much EQ when tuning lav or lectern mics for corporate jobs, including the much-derided graphic EQ, and I'm self-conscious about it. It sometimes prompts comments like, you might as well just turn it down, etc.
I'm fully aware of the gain loss and phase issues caused by aggressive EQ. The thing is though, it sounds really good. I get enough level, and my work generally gets lots of compliments from both clients and my peers. Why should I change my workflow? For what it's worth, I'm gradually moving away from graphic EQ and opting for broader parametric filters for tonal shaping, and sharp notches to deal with feedback issues.
But the philosophical question remains. Cheers, Carl. Thanks, Carl, for a wonderful question. I'm [00:01:00] excited to dive in here, and I want to uncover a little bit of the question behind the question. EQ is a useful tool, but can too much of a good thing hurt you? I think that's what's happening here. So: are aggressive
EQ moves hurting things, even if they make things sound better to you and your clients? To uncover this, the roadmap for today is to get us all a little more comfortable with EQ as a tool in and of itself, and with how we view it within the context of optimizing how dialogue and clear corporate audio sound.
We're going to look at different types of EQ and uncover, under the hood, what is actually changing with phase, like you talked about. And then I want to use that data to build a proper order of operations: where I would place EQ, when to use it, and in exactly what order, to optimize the tonal balance of the source itself and the system, making sure it sounds good for the in-room audience, any remote audiences, or [00:02:00] any recordings.
And my hope with all that is to make EQ feel less like a black box, so you feel confident when you do need to use it judiciously, and ultimately can apply the right amount of EQ in the right place to maximize your results.
[00:02:16] An EQ Primer
---
Michael: Let's start with a bit of an EQ primer to make sure we're all on the same page. And Carl, I know you're a seasoned engineer, you know what an EQ is, but I just want to have some common language for all of us, especially if you're new to using this tool and want a better perspective on what's going on. So: an equalizer makes frequency-dependent level changes.
So within any given source or what we are hearing, we're hearing a bunch of frequencies at the same time, unless we're hearing a single sine wave. And the relative balance of all those frequencies against each other is what gives us timbre or tonality. So timbre, you've probably heard if you are a musician, is a term that describes the qualitative aspects of a sound.
So for instance, a [00:03:00] muted trumpet has a very bright, harsh, tinny sound, those are the adjectives that come to mind for me, versus a flugelhorn, which is a very warm, luscious, inviting sound. So that's the contrast across the spectrum: when something has a lot of high-end energy, it can be very agitating and gets our attention.
Our human ears have evolved to be very sensitive there, to make sense of what's going on in our surroundings. Versus something with a rolled-off top end feels more distant, far away, not as in-your-face, is another phrase that comes to mind. How we describe all of that comes down to changes in tonality our brain perceives.
And of course, when something sounds distant, we're also hearing parts of the room. It's not just the EQ or tonality of it that's giving us those clues, but the roll-off in the top end is a big part of that clue.
Bottom line here: EQ helps us alter the tonality of a source to get it to a different tonal balance that might fit our [00:04:00] show. As mix engineers, we're responsible for looking at every single source, whether that's an 8-person panel or a full drum kit, and balancing it so it sounds good in and of itself. Then we have to think about all the downstream places that source might be going: an in-ear mix, a monitor wedge, the PA itself, a broadcast feed. So we have to separate the idea of EQing something to get it sounding good at its source
from using EQ in other places to shape how those destinations project what's going on. So I want to plant the seed early that we might have to divide up how we think about using EQ into these disparate tonal buckets before we start hacking away at something.
It's very easy, when you hear something sounding off or not balanced, if you will, to reach for the EQ first, because it's right there in front of us at our consoles. But I just want to make sure we're thinking about the source first. So my [00:05:00] voice is going to sound different than Oprah's. Duh, we're different people.
I'm a male, she's a female. There are different tonal characteristics to our voices. But in general, we want them to feel like they belong in the same universe, as if she and I were having a conversation. So there are certain characteristics that are universal about the human voice that we want to get balanced.
There's diction, there are formants, all these different ways we can get really nitty-gritty about the human voice, but to my ear, they fall in
[00:05:27] Tonal Ranges of the Human Voice
---
Michael: specific parts of the frequency spectrum. So for the tonal balance of the voice, starting from the lows and moving up to the highs: below about 80 or 100 Hertz is really just chest, or felt, resonance.
And you can use a high-pass filter to balance that. So if I have a lavalier microphone on Morgan Freeman or James Earl Jones, a very rich, warm voice, since that microphone is close to the chest, we might get a lot of extra resonance there. So I can use a high-pass filter to roll some of that off.
I usually like using a second-order filter, or [00:06:00] 12 dB per octave, for a gentler slope than something more aggressive, which I feel ends up kneecapping the voice, if you will. Moving on up, we can balance the chest with the bottom of the throat at around 100 or 200 hertz. Warmth still comes to my mind here.
Thickness is another word. So if a voice sounds too thick or muddy, I'm usually reaching for 200. If I want it to sound more warm and inviting, that's 100 hertz. Moving on up, we reach 500 Hz, and this to me sounds boxy. This is what I'm almost always cutting in the human voice, and in a lot of sources in general.
I think the universe just has too much 500 Hz for whatever reason. Maybe 400 as well. But balancing this boxiness in the human voice is where I go next. Moving on up to 1k, which is nasal in my mind, just north of the middle of the midrange. If we look for the frequency that's logarithmically in the middle between 20 Hertz, the bottom of the human [00:07:00] hearing range, and 20k at the very top, that's around 700 Hertz.
So 1k is just above that, and again close to 500. That's the meat and heart of what we're listening to. It's not what we're most sensitive to, but it's right in the middle, so it's tempting to start cutting a lot of it out. Really paying attention here and getting it balanced is helpful.
Next in my mind is pinched, and that is 2k, so I've just doubled from 1k to 2k, and this is where our S's start creeping in. In the human voice that could be as low as 2k or as high as 10k, but we can use a de-esser to help manage some of that. In a pinch, if you don't have one, hey, see what I did there, pinched at 2k,
you can use a static EQ move to help, but in general a dynamic tool is better at taming S's, since they only come on the words that have S's and T's in them. Moving on up, 5k is abrasive and squinty, so if there's too much here I find myself literally wincing or squinting when that is happening.
So you can almost listen [00:08:00] to your body here and realize what's going on. Lastly is airy or fizzy, and that is 10k on up. Some S's are up here. There are a few women I end up working with a lot in the corporate sector whose S's sit way up at 10k. It's strange hearing sibilance that high compared to someone whose S's sit lower, but just know it can span a very large range.
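If it helps to see that roadmap laid out, here's a small Python sketch of the vocal ranges just described. The band edges are loose approximations of the descriptions above, not measured data, and real voices overlap these boundaries constantly:

```python
# Approximate "tonal buckets" for the spoken voice, per the walkthrough above.
# Edges are rough editorial guesses, not acoustic measurements.
VOICE_BANDS = [
    ((0, 100), "chest / felt resonance -- candidate for a high-pass filter"),
    ((100, 300), "warmth and thickness (muddy when excessive)"),
    ((300, 700), "boxiness -- the range most often cut"),
    ((700, 1500), "nasal / the heart of the midrange"),
    ((1500, 3000), "pinched; the lowest sibilance starts here"),
    ((3000, 8000), "abrasive / squinty when overdone"),
    ((8000, 20000), "air and fizz; some sibilance reaches up here"),
]

def describe(freq_hz: float) -> str:
    """Return the descriptor for the band containing freq_hz."""
    for (lo, hi), label in VOICE_BANDS:
        if lo <= freq_hz < hi:
            return label
    return "outside the audible voice range"

print(describe(500))  # the band that gets cut most often
```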
So we've unpacked the human voice in and of itself, and then it's going into a microphone of some sort. There are four main types: a lectern or podium microphone, a lavalier, which is usually clipped on, a headset microphone, or a handheld microphone. Three of those are usually wireless.
The podium is usually wired, but there are wireless options out there. And that is responsible for picking up the human voice. Of course, we want to get it as close to the mouth as we can without getting a bunch of plosives or a huge amount of proximity effect. But that mic capsule is going to have a certain EQ [00:09:00] shape to it.
It's going to shape the tonality of that voice. So those are at least two variables in the tonality we actually capture: what someone's voice sounds like on its own, sitting across from them outside of a sound system, and then the mic capsule and the curve it applies to the voice once it captures it.
So that's why different vocalists prefer a certain microphone. For instance, Chris Stapleton, when he sings live, uses the sE V7. It's roughly a $100 microphone, but it sounds wonderful on his voice. They carry a bunch of them, and it gets the job done for how he wants his voice to translate. Similarly with the spoken voice:
oftentimes you're not picking out specific capsules for specific presenters, but getting comfortable with what different capsules do, what they sound like, and how they pick up the voice is good to have in your back pocket.
[00:09:50] Types of EQ
---
Michael: Now that we're comfortable with how the human voice sounds as it moves through a microphone, let's think about the EQ circuit we use to balance the tonality of that voice. Carl [00:10:00] mentioned earlier that he was using graphic EQ sometimes, and parametric. For graphic EQ, the most common flavor is a 31-band.
So there are actually 31 little sliders that you can move up and down a certain amount, and they're fixed in frequency. There's 31.5 hertz, 40, 50, 63, moving all the way up to 16k. They're spaced in third octaves, so if you think about going from a C to a C on a piano, they jump up a third of that octave, another third, and then another third, and they've arrived at the other C.
So it's equal spacing, logarithmic spacing actually, since our human hearing is logarithmic, and we're able to grab each frequency and move it up and down. This has been a common tool because you can just look at the sliders and visualize the curve.
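For the curious, those third-octave centers can be generated rather than memorized. This sketch uses the base-10 band numbering convention; the printed slider labels (31.5, 40, 50, 63, and so on) are rounded nominal versions of these exact values, and a full 31-band unit conventionally spans about 20 Hz to 20 kHz:

```python
def third_octave_centers():
    """Exact base-10 third-octave center frequencies, f = 10**(b/10),
    for band numbers 13..43 (nominal 20 Hz up to 20 kHz)."""
    return [10 ** (b / 10) for b in range(13, 44)]

centers = third_octave_centers()
print(len(centers))  # 31
print([round(c, 1) for c in centers[:4]])  # ~20, 25.1, 31.6, 39.8 -> printed as 20, 25, 31.5, 40
```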
But I want to warn you that the resulting curve may not always look exactly like the sliders. It will resemble it, but not be exact. [00:11:00] On the other side, there's parametric. Para means beside or beyond, so this was new functionality for equalizers when it was introduced, and fully parametric means I can manipulate three things independently.
And that is Q, or quality, which is how narrow or wide a filter is; the frequency itself, so if I can go all the way from 20 hertz up to 20k, it's fully sweepable within that range; and then the gain value, how much I can increase or attenuate the level. Most commonly, a channel strip coming in on your console will give you a parametric, while graphic EQs are usually reserved for outputs or buses.
I don't find myself using them very much because they're not as flexible, but they can be used if you need to stack it and use EQ to help shape things within the context of a total
[00:11:52] How Much EQ Is Too Much?
---
Michael: mix or an output, and we will get to that. Later.
Now that we have context for EQ, we'll ask the question: how much EQ [00:12:00] is too much? A common equalizer range is plus or minus 18 dB. If we know that plus 6 dB is roughly a doubling, then 18 divided by 6 gives us three doublings. That's about an 8x increase, or roughly an 87 percent reduction if we did minus 18.
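Those numbers are easy to sanity-check with the standard amplitude convention of 20 times the log of the ratio:

```python
import math

def db_to_ratio(db):
    """Convert a decibel change to a linear amplitude ratio (20*log10 convention)."""
    return 10 ** (db / 20)

print(db_to_ratio(6))    # ~1.995: +6 dB is roughly a doubling
print(db_to_ratio(18))   # ~7.94: three doublings, roughly 8x
print(db_to_ratio(-18))  # ~0.126: about an 87% reduction
```

(That's the amplitude convention; power ratios use 10·log10 instead.)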
That would be at the center of that band of EQ. Now I want to present to you a recent use case, a system I tuned at a church just south of where I live. We had an RCF hang of six speakers, and I've used this speaker a whole lot. One thing I know about it is that it has a big resonance between 75 and 90 Hertz, and here we had six of them together.
And so I placed my microphones front to back; in this case I had three of them: in the front, in the middle, and at the rear. And again, with that hang, I saw a giant resonance and had to think to myself, how can I solve it? Well, I can use EQ. This was a very sharp, narrow, and huge resonance, so I had to [00:13:00] be very, very aggressive.
I also had to use some other low-end shaping EQ to bring up the level, and a little bit up top to balance things out around 3k. So I used very aggressive EQ. And someone just waltzing up to the system might look and think, wow, this person's hacking this system to death. What's the deal?
But if you actually look at the before and after, it went to a very good place. I got it to my target curve and it sounded great. So I gave myself permission to use EQ. But you may be asking: with those really sharp filters, or that many changes, aren't you really ruining the phase?
And in case you don't know what in the world phase is, you might hear the term thrown around a lot, and it's often a common excuse: oh my word, the phase. So if we understand it, phase helps us determine where a specific frequency is in its cycle. So a phase graph shows us a [00:14:00] change in timing over frequency.
So frequency response is both magnitude response and phase response put together. We usually say frequency response and just look at magnitude, which is similar to what an EQ graph would show you, but a phase graph shows you timing over frequency. I cover this a lot more in depth in my course.
I have a few other examples on my YouTube channel, but in general, where the phase graph is flat, there's not much change in timing; as the slope gets steeper, going either up or down, that means a higher rate of change in timing versus the other frequencies. All that being said, if I look at the phase trace of my speaker before and after, there's really not that big of a change.
Of course there is a change, because we cannot get a change in magnitude with EQ without getting a change in phase; they're joined at the hip. That's what a minimum phase relationship is. And almost every EQ [00:15:00] we use professionally is a minimum phase EQ, unless we're using a linear phase EQ or a special FIR filter, and we can cover those later.
But if you're on any old console using EQ, adjusting the level over frequency, you are also adjusting the phase response at that point as well.
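You can watch magnitude and phase move together with a quick sketch, assuming scipy is available. Here's a second-order, 12 dB-per-octave high-pass like the dialogue filter mentioned earlier: right at the cutoff, where the magnitude is changing steeply, the phase is shifted about 90 degrees, and far above the cutoff, where the magnitude curve is flat again, the phase shift has nearly vanished:

```python
import numpy as np
from scipy.signal import butter, freqz

fs = 48_000
# A gentle second-order (12 dB/octave) high-pass at 140 Hz, as used on dialogue:
b, a = butter(2, 140, btype="highpass", fs=fs)

# Evaluate the response at the cutoff frequency and well above it:
w, h = freqz(b, a, worN=[140.0, 10_000.0], fs=fs)
mag_db = 20 * np.log10(np.abs(h))
phase_deg = np.degrees(np.angle(h))

# At 140 Hz: magnitude down ~3 dB, phase shifted ~90 degrees.
# At 10 kHz: magnitude ~0 dB, phase shift nearly gone.
print(mag_db, phase_deg)
```

That's the minimum-phase link in miniature: where the magnitude trace bends, the phase trace moves with it.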
And Carl, you had mentioned earlier that some engineers say, well, you might as well just turn it down. I guess they are somewhat correct, because if you take, let's say, four equally spaced bands of EQ with fairly wide Qs and bring them all down together, what you see is the phase response ending up essentially the same.
To get a large change in phase response, there has to be a relative difference between one part of the spectrum and another. That's why a high-pass filter, which is definitely bringing down part of the spectrum, cutting the lows, if you will, is a big change in the frequency response in that area, so we're going [00:16:00] to get a big change in phase response.
So if we have all those four bands moving together, we're not going to see a big change in the phase response because they are all close relative to each other.
[00:16:10] The EQ Order of Operations - Tonal Balance Buckets
---
Michael: Now that we understand our goal of capturing good audio from the human voice, understand that it's passing through a microphone with a response of its own, and know what EQ can do, I want to introduce you to my EQ order of operations, basically giving you three tonal balance buckets. This is not the only way to do it, but it is my way.
I want to break it down for you to make sure you're using EQ in the appropriate place to solve the appropriate problem.
Let me break this problem of using EQ to solve the different issues that come up in live audio into three separate components. So what you may think is a source problem, aka someone's lavalier having too much 200 Hertz and feeling really thick, could actually be a system issue. So I'm going to help you break apart whether it's a system problem or a source problem.[00:17:00]
And number two: we may need to sacrifice the absolute, in-and-of-itself tonality, does this sound good on its own, to make it fit within a mix. That could be things happening at the same time, someone's voice working against a piano or a bass guitar, or it could be multiple people talking back and forth, making sure the relative balances of their tonalities sound good.
This is why bedroom Helix patches from guitarists, or sounds someone built on a keyboard at home, don't really translate all that well live: they were made in a vacuum. The player isn't hearing their part, let alone the arrangement, within the context of a full group. So if you're developing sounds or building a palette to pull from in a live context as a musician, always make sure it's rehearsed within the full context, and check that the amount of tonal real estate you're taking up makes sense for the parts you're playing.
Again, this is obviously subject to arrangement, not just the sound itself, but they [00:18:00] are definitely related. So pay careful attention there.
Now let's move into these three specific tonal balance buckets. The first one is the PA system. The second is the source itself. And the third is what I call the room and regeneration. We're going to move in that order. You may think it's weird to start with outputs before inputs, but even when I'm on a show setting up my system, I always worry about the outputs and the system first, because it's much more forgivable, if the band shows up early, to still be patching the stage than to have no PA out so they can't hear anything.
Among other reasons, but I always start with outputs first.
Again, number one, the PA system. Say we have two speakers on sticks, just for simplicity's sake. I want to start here because this is the lens everything is listened through. Like my glasses here: if I have mud on them, I cannot see anything. So the ultimate thing the majority of my audience is listening through has to
be optimized first, and we'll talk about remote audiences in a little bit. So [00:19:00] again, it's just good practice to have your PA up and running. I have an entire workshop on this that covers how to optimize basically two speakers on sticks and two subs, if you want to learn more about specific tuning.
So today's goal is not to get you good at system tuning for a PA, but just to know that you're going to be using EQ to adjust the tonal balance there. If you don't have a system processor, a separate device handling and dishing out all the signals, gains, polarity, everything, you're going to be doing this on the matrix outputs of your console.
So you'd send an entire mix to a matrix, and each matrix then feeds a zone of speakers, and you can use EQ on that output to get to a specific target. You're able to measure the speaker itself and then take multiple measurements throughout the room. If you're new to this, where I would start is directly in front of the main speaker.
Solo your right speaker, say, then take a measurement in the front row, in the middle of the audience, and at the back, and look at what's common and what can be changed. Then adjust the tonality with [00:20:00] EQ to get it to a good target. That target trace is in my audio toolkit, so make sure to check that out; it's the one I use on almost every gig.
You can get that, and EQ your system to be nice and natural in correspondence with that tonal balance trace.
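If you export those front, middle, and rear traces as magnitude data at shared frequency points, a simple spatial average is one way to find what's common before EQing toward a target. The numbers below are toy values for illustration, not real measurements:

```python
import numpy as np

def average_traces_db(traces_db):
    """Average several magnitude traces (in dB) taken at different mic positions.
    Averaging in dB weights each position equally; averaging raw complex
    pressure instead would be dominated by position-dependent comb filtering."""
    return np.mean(np.asarray(traces_db), axis=0)

# Toy 3-point traces (dB deviation from target) at three positions:
front = np.array([0.0, -2.0, 3.0])
mid   = np.array([-1.0, -2.0, 1.0])
rear  = np.array([-2.0, -5.0, 2.0])
print(average_traces_db([front, mid, rear]))  # [-1. -3.  2.]
```

What survives the average at every position is a good candidate for system EQ; what differs wildly between positions usually isn't an EQ problem at all.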
In short, what I'm saying is get your system tuned and balanced first, so that it sounds good. And then, we're going to move on to number two, which is the source itself. So here, let's talk about a presenter on a headset, since Carl was asking specifically about corporate gigs. So the presenter's voice has a tonal response.
We can't change that. We might be able to ask them to speak up if they're quiet. Then we have mic placement; we just want to get that right and have it close, whether it's a headset, a lav, a handheld, or a podium, or lectern microphone, excuse me. The mic capsule has its own frequency response and polar pattern, so it's going to affect things.
And then there's the RF transmission, the signal path, the preamp, everything that gets it to your desk. And then we move to [00:21:00] the actual channel strip EQ. So my usual starting settings for almost any dialogue-based microphone, any of those four categories, are a high-pass filter at about 140, and a cut in the mid band, about 400 or 500 hertz, usually about 6 or 8 dB with a fairly wide Q of 1. That's a good starting point I've found for most sources.
I can, of course, tweak and adjust this. You might think, well, how can I make sure it sounds good in and of itself and isolate that variable? There are two ways, one with headphones and one with a reference. If you have headphones, you can PFL that channel and hear it after the EQ. And then you can have a recording of a really great-sounding piece of dialogue coming in on another channel: have your A2, or maybe your presenter, talk, listen to that, and then listen to that recording of really great dialogue.
And even if their voices are miles [00:22:00] apart, you at least know what the human voice sounds like through your rig. That gives you an anchor, a way to judge the human voice coming into your lav. And most often, I find that my dialogue needs more midrange sucked out in a live environment.
And so I'm usually then fine-tuning: where are the specific resonances in their voice? Am I hearing the S's? I'll start to fine-tune things, but I'm using the channel EQ specifically to make that source sound good. Again, the PA is off; I'm only hearing that source. So first, we've optimized the PA itself to have a balanced tonality.
I've optimized the source itself. And now I'm going to move on to what I call the room and regeneration. A caveat: EQ is actually the wrong tool to fully solve this problem, because once we have the lavalier, or headset, passing through the PA, we're getting bleed from the speakers back into the microphone, and that's what we're calling regeneration. It's bouncing [00:23:00] off the walls and coming back in. And regeneration becomes feedback when the gain around that loop crosses the tipping point.
It gets too high at a specific frequency, and it feeds back. But by definition, if it's coming through a speaker, some percentage of the original source is going to come back into the microphone, and it takes time to travel. So it arrives late, and that late arrival of a correlated source summing back with itself makes a comb filter.
And what changes the peaks and valleys in a comb filter is that travel path: the longer the path, the lower in frequency the comb filter starts, and the more havoc it's going to cause. All that being said, we perceive late energy differently depending on how late it arrives and what the frequency content is.
It's much easier for us to discern discrete echoes with high-frequency content. That's why when you clap, you hear the clack, clack, clack bouncing around. But if you just made a broadband noise, it would be harder to find that transient, that initial spike, if you will. It's [00:24:00] the very same reason the generic impulse response or delay tracker in Smaart has a hard time with subwoofers: there's no defined high-frequency click at the front to latch onto.
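The comb-filter arithmetic above is simple enough to sketch. For a direct signal summed with one delayed copy of itself, the first cancellation notch sits at 1/(2 × delay) and the notches repeat every 1/delay, which is exactly why a longer travel path starts the comb lower in frequency:

```python
def comb_notches(delay_s, count=5):
    """Cancellation-notch frequencies for a signal summed with a single
    delayed copy of itself: odd multiples of 1 / (2 * delay)."""
    first = 1.0 / (2.0 * delay_s)    # longer delay -> lower first notch
    spacing = 1.0 / delay_s
    return [first + k * spacing for k in range(count)]

# e.g. speaker bleed arriving 5 ms late (roughly 1.7 m of extra path at 343 m/s):
print(comb_notches(0.005, 3))  # [100.0, 300.0, 500.0] Hz
```

Double the path length and the first notch drops an octave, which is why long regeneration paths chew up the low mids in particular.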
So now that you've made sure the system sounds good and the source itself sounds good, you're going to listen in the room: okay, does this lav, or whatever type of dialogue source you have, sound okay with what I've done so far? So you might walk the room, listen, and think, because of the combination of the reverb time in this room and the amount of low-mid bleed coming from the PA back into the lav, it's feeling a little bit muddy, boxy, or overly warm. So is this a PA problem or a source problem?
I would say it's 75% a source problem and a little bit of PA, because the PA always has to be listened to within the context of a room. So you might take a little bit of lows or low mids out of the system if you feel like, overall, [00:25:00] the reverb time in the low mids makes things cloudy. But we're also having some of it come back into the microphone, back into the room, back into the microphone again.
So it's kind of a hard problem to untangle. I would then maybe use a low shelf to go back to the source and pull out some of that excess warmth, because again, we're getting that addition back into the microphone from the PA and that comb filter. And you might think: okay, I've got the lav sounding good, I've made those small tweaks to the PA, but now I need to make sure I have maximum gain before feedback.
And this is where Carl was talking about using small notch filters to deal with that. Where do you do it? I would send all of your dialogue microphones of like type to a group. So on my console file template, I have four of them. I have a lav group, I have a headset group, I have a lectern group, and I have a handheld group.
And hopefully I have the same capsule type per group, to make sure they all match. That way, within that microphone's specific polar pattern and frequency response, I can get it wrangled [00:26:00] to make sure it sounds good. So I'm going to send them all to that group.
They will not go to the stereo bus, but those groups will. And that's when I'll push the level on the individual groups and apply any smaller, very tightly focused bands of EQ to get more gain before feedback, aka lessening that frequency coming back out of the speaker into the microphone, building the comb filter up and causing feedback.
So I'll go through each capsule or microphone type and do that. Can you do this with a graphic EQ? If that's all you've got, absolutely. Just know that you're fixed: you have a fixed Q and fixed frequencies, with only the gain amount variable. But I like using a parametric because I can isolate that it's 367 Hz, make it very narrow, tighten it up, drop it, and I'm not wreaking as much havoc.
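As a sketch of that kind of surgical notch, assuming scipy is available (the 367 Hz and the Q value here are just the illustrative numbers from above):

```python
import numpy as np
from scipy.signal import iirnotch, freqz

fs = 48_000
f0 = 367.0   # the offending frequency found by ear or analyzer
q = 30.0     # very narrow, so surrounding program material is spared

b, a = iirnotch(f0, q, fs=fs)

# Check the depth at the notch and the damage one octave away:
w, h = freqz(b, a, worN=[f0, 2 * f0], fs=fs)
mag_db = 20 * np.log10(np.abs(h) + 1e-12)
print(mag_db)  # deep cut at 367 Hz, essentially untouched at 734 Hz
```

With a Q of 30, the -3 dB bandwidth is only about 12 Hz (f0/Q), which is the "tighten it up, drop it" move in filter form.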
And why do it at the group? Because I do not want that lav, if it's going to any other sort of destination, to be colored by the fact that I [00:27:00] need to manage it and make it sound better just for the room. So what I do then is have all dialogue-based microphones go to a dialogue group, and that is able to feed my program matrix or program bus, depending on which console I'm working on.
And I can apply any subgroup processing I want there. That's usually some denoising, or dynamic noise suppression if I have that available. At any rate, I can also have that bus available to send to a prompter operator, if they just want to hear dialogue and nothing else so they can track easily.
So that's how I discriminate: I'm applying EQ at the individual source to make it sound good in and of itself. And since I started there, it sounds good on the program or record feed, and it sounds good in the room. And I'm using separate EQ downstream of it, separate again for how it goes to the live stream, to make sure I'm only managing each problem for that specific audience.
[00:27:55] Recap
---
Michael: All right, here's a recap of what we covered. An equalizer makes frequency-dependent [00:28:00] level changes. And why is that helpful? It can help us balance the tonality of our sources, aka our microphones, a kick drum, whatever, and the destinations, the speakers, that are then projecting that sound into a room.
It's a system where everything is related: from the voice into the microphone, out through the speakers, reverberating around the room, and coming back into the microphone. We have to be aware of how all these things play into each other. But by using the correct order of operations, starting with the PA, moving on to the source itself separate from the PA, and then combining the two to think about the room and regeneration, we can use EQ in the correct order to maximize tonality, intelligibility, and gain before feedback.
My name is Michael Curtis. Thank you so much for hanging out with me today. If you'd like to ask a question, I've got the link below, or go to producedbymkc.com/question. Thank you again, Carl, for a great one. I hope this is wonderful and helpful [00:29:00] for you on your next corporate gig. Thanks for watching.