Monday, 8 September 2014

Helpful Hints for Self-Shooting Sound

Those of us who have been in this industry for a while remember when location shooting involved a two or three man crew, with plenty of equipment and the time to use it properly. Sadly, the world has changed and now this sort of thing is more often the domain of the lone researcher with a camcorder and a few sound bits.

I could rant about this and say it's a ridiculous thing to do, but sadly it seems to be the rule rather than the exception these days, so perhaps a few helpful hints might be useful? I wrote the following a while ago as part of a set of presentations for ITV production teams who might be self-shooting. Feel free to make use of any of it in your own shooting workflow.




Sound acquisition on location: A few tips for the best results


Tip #1: Hire an experienced sound recordist with appropriate equipment.

As a recordist, I have to suggest that first. It will cost you more to begin with, but your results will be immeasurably better from the start. And your editors will love you.

But you haven’t got the budget for that, otherwise you wouldn’t be reading this. You have to make do with what you have, and more often than not I suspect you’re on your own. So here’s a few suggestions to help both you and your workflow.

For this, I’ll assume you have a kit with at least a boom mic and pole, a radio mic and a set of headphones. Also that you have cabling to connect it all, and you’re connecting to the camera via two three-pin XLRs.

First thing: Get yourself familiar with how the bits connect together. Don’t wait until you’re about to enter battle!

Boom mic on pole, XLR cable to camera. Phantom power on if needed! Set input gain to “mic”.

Radio mic into transmitter, receiver into camera. Don’t mix the two up and wonder why it doesn’t work! Set input gain to “mic” or “line” depending on the output level of the receiver. Only one will look right; the other will either be massively loud or completely inaudible.

Headphones into headphone socket. And turn them up.

When you’ve done it a few times, it will become easier! The better you know your equipment, the better the results.

Hopefully you have now plugged it all up, and can hear something coming back on the headphones.


Second thing: If you do this, I GUARANTEE your sound will be improved.

You must LISTEN to what is coming in on your boom/radio and DECIDE if it’s good.

How do you know if it's good? You need to get an idea of what "good" sounds like. Try recording yourself with both the boom and the radio and then listening back. This is a good habit to develop at the start of each day to make sure your equipment is working; just because you can hear sound through your cans does NOT necessarily mean it's being recorded OK!

The boom should be no further than 18 inches from your mouth, and pointing at it. The radio should be clipped centre chest, no lower than nipple height. Play both back, and listen to how close each sounds compared to the background noise. You should hear a cleaner, more direct sound on the radio mic, but the boom should sound more "natural". THAT sound quality is what you're aiming for when you start shooting, and you must always listen for any problems, both during recording and playback.

Check particularly for a solid, clean signal with no electrical noise or hums, which would normally indicate faulty hardware. There should be no excessive hiss or distortion if the recording levels are right. Which leads us to…


Third thing: Recording levels! There is an optimum level that you must record at; if you go below this you will get hiss or other noise, and if you go above you will be in danger of distortion. Scales differ between cameras, but most have "0" at the top, with minus numbers below, sometimes with a mark at around "-20". YOU DO NOT WANT TO GO ANYWHERE NEAR THE TOP OF THE SCALE! Aim for no more than around -10 for normal speech. If in doubt, err on the side of caution: a little noise is easier to fix than a distorted signal. To make the editor's life easier, try to be as consistent as possible. Not always easy! Again, check the playback to make sure it sounds OK. If you hear anything wrong, you must mention it at the time, when you've still got a chance of fixing it. Don't wait for the editor to find it!
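If you want a quick sanity check away from the camera, here's a rough sketch (my own, not part of any camera or edit workflow) of how you might measure the peak level of a transferred clip. It assumes Python with numpy installed, a 16-bit PCM WAV file, and an invented filename.

import wave
import numpy as np

def peak_dbfs(path):
    # read the raw samples and report the loudest one relative to full scale
    with wave.open(path, "rb") as w:
        if w.getsampwidth() != 2:
            raise ValueError("this sketch only handles 16-bit PCM WAV files")
        frames = w.readframes(w.getnframes())
    samples = np.frombuffer(frames, dtype=np.int16).astype(np.float64)
    peak = np.max(np.abs(samples)) / 32768.0
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

level = peak_dbfs("interview_check.wav")   # hypothetical file name
print(f"Peak level: {level:.1f} dBFS")
if level > -10:
    print("Hot for speech - back the gain off a little")
elif level < -30:
    print("Very low - expect hiss when the editor pulls it up")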

Most cameras have an auto level option. This seems like a useful thing, and indeed it will prevent you coming back with a distorted recording, but bear in mind that the auto level is not intelligent! If your interviewee stops talking, it will try to bring the background noise up to match their speech level, which will sound very unnatural. It may, however, give you a usable sound which you might not get with a manual level; it MAY, for example, be safer to use it if you're on a top mic.
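To see why it misbehaves, here's a toy sketch of a naive auto level. This is not any real camera's algorithm, just an illustration of the principle: each short chunk of audio is forced to a fixed target, so the moment your interviewee pauses, the gain shoots up and the room noise gets pulled up to speech level.

import numpy as np

def naive_auto_level(signal, frame_len=4800, target_rms=0.1):
    # crude per-frame gain rider: every 0.1 s (at 48 kHz) is pushed to the target
    out = np.asarray(signal, dtype=float).copy()
    for start in range(0, len(out), frame_len):
        frame = out[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2)) + 1e-12   # avoid dividing by zero
        out[start:start + frame_len] = frame * (target_rms / rms)
    return out

# During speech the gain sits near unity; during a pause the frame RMS is tiny,
# so the multiplier becomes huge and the background hiss is lifted to the same
# apparent level as the dialogue - the unnatural "pumping" described above.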

Nobody said sound was easy. So you have everything connected and working, and you’ve checked levels and playback. Now we move to where to stick things…


Fourth thing: Do you use the boom or the radio? You should have noticed how different they sound, so how do you decide which to use? Often the editor would like to have both available, so the safest choice is to put the boom on one track and the radio on the other. It seems like cheating, but it's much safer for the editor to make the choice back in his nice calm suite, where he can hear how it all fits together. Be VERY careful about committing to one or the other on location!

Booms sound more “natural” than radio mics; they match more what your ears hear. They MUST be close to the mouth (no more than 2 feet away) and pointing at it for best results. They are defeated by distance, loud background sound levels and by echoey/reverberant rooms. If outdoors they MUST have adequate wind protection, and they must be held carefully to prevent handling noise.

Radio mics sound more "focused" than booms; they reject background noise and reverberation better, but can sound too clean on their own. They should ideally be mounted between mid-chest and neck. They are very vulnerable to wind noise outdoors, and to clothing noise at all times. Decide at the start if you really need to conceal them; this is a real black art and can be very hit and miss. If you have to do it, put them as near the surface as you can, and secure the clothing around them to prevent it moving. You CAN'T bury them under a coat and expect to get good results. Again, LISTEN very carefully to your results; if the mic sounds woolly and indistinct, or if it has severe clothing noise, you MUST be in a position to hear it, and to try a different approach. Unfortunately there is no "one size fits all" approach to radio mic concealment.

Always ask yourself: Do I REALLY need to conceal them?.....


Fifth thing: LISTEN to your location!

When you go into the place where you're planning to shoot, stop a moment and listen. What can you hear? Is it appropriate to what you will see? If not, can you control it or use it? A busy road next to your location will be noisy and you can't stop it, but if you see it in shot your brain accepts it as part of the scene. If it's still too loud when you listen through headphones, can you change the location? Remember as you listen that the radio mic will probably sound better here. (A good tip is to listen without looking at the lips of the talent; if you can still understand them this way, then the listener who IS looking will have a reasonable chance.)

If you’re indoors, the sounds are more subtle. Listen for things like heating noise, fridges and fans. Turn them off if you can (remembering to turn them on again after!). Any noise which changes or goes across an edit will leap out later.

In either case, when you cut everything together, you’ll still have things which don’t quite sound right. Which takes us finally to…


Sixth thing: How to help your editor!

Good sound doesn’t just stop at recording the on-screen talent. If you’re recording in different locations at different times, your backgrounds will change whatever you do. To make your edits work, you need “clean” backgrounds to smooth over the edits. So, when you finish shooting in a location, take a few moments to record just the background sound. Even if it doesn’t sound like much, record 30 seconds of it with nobody speaking or moving around. The editor can then lay this over any edits which jump out at you because the background was a little different. Don’t forget to log these so they can be found easily!


All the above is just a start, but it's a step in the right direction. It might sound like a lot of hard work. Which indeed it is. But if you take the time to acquire good sound, and to understand why and how it's done, it will improve your end product immeasurably.


Thursday, 6 March 2014

The Problems With R128

First off, a statement: I have no problem with the principle of mixing to a constant loudness, because it's how I was trained at Evesham 30 years ago and it's how I've mixed shows for 20 years. The idea is not new, but in recent years, due to lack of training, it's fallen out of practice and people have become slaves to the PPM without using their ears.

Second statement: I do not consider myself to be the greatest mixer in the world. I could be wrong about all this. But I DO mix 10+ hours of network TV every week, and I have lots of chances to measure and experiment. There are some major problems I have found, and nobody has yet been able to answer them; there are many "Heads of Technology" who will quote chapter and verse of R128 at me, but few craft mixers who have actually done it.



I'll start on the small bits and then work up.



Firstly, why are we doing this? I thought the biggest complaints from viewers were about adverts and promos, yet these are not covered by R128! There WILL be a spec for them, but why are we going to all this trouble when all we needed to do was tame the commercials? Incidentally, the last year has seen a reduction in advert level to the point where there doesn't appear to be a major discrepancy any more.



Secondly, the measuring algorithm. This whole thing is based around a metering system which is meant to tell you how loud something sounds, rather than how much it meters. Hang on, didn't we already have something that could do that? Like a pair of EARS? Wouldn't it have been a lot simpler to just train people to use the PPM and LISTEN, like I was taught? Then we could have kept the system people were familiar with. I even had an acronym for this. 


P.E.T.: PPM. Ears. Training.


But if you're going to replace humans with machines, then surely you will have made sure that your ear-measuring meter actually matches what a human ear hears? When we moved to our new home at MediaCity our studio gained a loudness meter, and I started measuring everything we did. I found that between shows, with a fixed monitor gain, speech which sounded constant to me was reading DIFFERENTLY on the loudness meter. Even within a run of the same show, there were inconsistencies. It seems that the faster the speech, the more it "fills up" the integration, even though it doesn't sound louder to the ear. I've seen errors of up to 2 LU between shows; this is as much as the variance allowed on the DPP spec!

Now I'm aware that my opinion alone may be incorrect, so I've asked a few people, both sound and non-sound staff, what they thought, taking care to word the question in a way that doesn't influence them. They seem to agree with me. This is, of course, not a scientific study, but it does seem to confirm what I think I'm hearing. 

Maybe hearing is such a subjective thing that any algorithm to measure it should take into account a lot more than just the "area under the curve" of an integrator?
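To make that "area under the curve" point concrete, here is a deliberately simplified sketch. It leaves out the K-weighting and gating that a real R128 meter applies, so the figures are purely illustrative, but it shows how the same words at exactly the same level integrate to different readings once the length of the pauses between them changes.

import numpy as np

rate = 48000
word = 0.2 * np.sin(2 * np.pi * 200 * np.arange(rate // 2) / rate)   # a 0.5 s "word"

def build_speech(pause_seconds, words=20):
    # the same word repeated, separated by silences of the given length
    pause = np.zeros(int(rate * pause_seconds))
    return np.concatenate([np.concatenate([word, pause]) for _ in range(words)])

def integrated_ish(signal):
    # naive mean-square over the whole programme, in dB (no weighting, no gate)
    return 10 * np.log10(np.mean(signal ** 2))

slow = build_speech(pause_seconds=0.5)   # measured, hesitant delivery
fast = build_speech(pause_seconds=0.1)   # rapid-fire delivery
print(f"slow talker: {integrated_ish(slow):.1f} dB")
print(f"fast talker: {integrated_ish(fast):.1f} dB")
# The fast talker reads a couple of dB higher even though every word is at
# exactly the same level - the pauses simply dilute the average less.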




Thirdly, the selection of the "Integrated" value as the criterion for program acceptance. The idea that the viewer cares about the overall level of a program is, I think, flawed. What the listener cares about is having to turn their TV up or down, and the thing that causes this is dialogue variation, whether the wrong level or with too much dynamic range. The EBU say that this allows for greater dynamics within a show, but with no restrictions on short-term level variation it is possible to mix a program that has so much dynamic range it's unlistenable even though it hits the numbers. This was, of course, possible with the old system, but I thought the new one was meant to sort this out? I've taken show segments, made them unlistenable with massive level variance, and then submitted them for testing to QC. All of them passed! How can this possibly benefit the home viewer? Again, if the new system is no better, why not stick with the old one and train people to use it properly?




Fourthly and finally....

Following on from the above, there is a major problem with only specifying the integrated value as a delivery requirement. This is a major practical issue for me, and so far I have had conflicting replies from everyone I've asked. Bear with me, this may take a while to explain.


First, a question, the answer to which is very important:

"Across a network, should the average loudness of normal presenter speech be constant between shows, and is so what should it be?"



The official answer I had from the DPP is "Yes, -23LUFS", as is that of Hugh Robjohns in this month's Sound On Sound article on loudness. There is NO information on this in the delivery requirements.

To understand the problem, we must follow through the implications of a Yes or No answer to the above. We must also look at the basic components of a TV show mix, and how they interact.

A TV studio sound mix typically has three components: Speech, Music and Applause/FX. These are balanced to sound correct relative to each other, and the idea is to maintain an appropriate dynamic range for the home listener. 

The problem is very simple. For some shows, the speech is the loudest part of the show, for others the quietest, and for some it's in the middle. 


Examples:

Speech is loudest: Newsnight
Speech is in the middle: Countdown
Speech is quietest: Jeremy Kyle

I know these are on different networks, but the principle is valid regardless.




So, consider the YES case:

We wish speech to be constant between shows. The DPP recommend -23 LUFS, so that is what we do. 






Newsnight: This is easy; there is mostly speech and very little else, so that works, giving us a final integrated value of -23 LUFS.




Countdown: This is also easy; there are quiet sections (the clock bed) and loud sections (the applause and music) around the speech, which more or less cancel out to give us -23 integrated again.





Jeremy Kyle: This is a problem. The speech is the QUIETEST element, and there is much shouting, music and applause, all of which are louder. So if we put the speech at -23 LUFS then the final integrated reading will be well over -23 because of the louder bits. Typically the show sits at around -19 LUFS using this method.
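A quick back-of-envelope version of that last case, with the levels and segment proportions invented purely for illustration and the gating ignored: integrated loudness is effectively an energy-weighted average, so the louder applause and music drag the overall figure well above the level of the speech.

import math

segments = [
    # (loudness in LUFS, fraction of the running time) - invented numbers
    (-23.0, 0.5),   # presenter speech, pitched at the recommended level
    (-16.0, 0.3),   # shouting and applause
    (-18.0, 0.2),   # music stings and VT inserts
]

energy = sum(frac * 10 ** (lufs / 10) for lufs, frac in segments)
integrated = 10 * math.log10(energy)
print(f"Integrated: {integrated:.1f} LUFS")   # comes out around -19, not -23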








RESULT FOR THE "YES" CASE: The viewer is happy, the mix sounds right but we cannot conform to the integrated DPP spec for R128. 





Now consider the NO case:


This is easier from our point of view. If we don't have to match speech levels, then we can keep our mixes the same, and simply offset the level to hit the DPP spec.
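In practice that offset is just a single static gain: measure the finished mix on an R128 meter, subtract the reading from the target, and apply the difference to the whole programme. A trivial sketch, with the measured value invented:

TARGET_LUFS = -23.0
measured_integrated = -19.2                      # hypothetical meter reading
offset_db = TARGET_LUFS - measured_integrated    # -3.8 dB in this example
print(f"Apply {offset_db:+.1f} dB of static gain across the whole mix")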


RESULT FOR THE "NO" CASE: The DPP spec has been satisfied, the mix sounds right but the viewer is still left with inconsistent dialogue levels and STILL has to turn their TV up or down between shows. Nothing has been gained.




There is a third case: we remove all dynamics from our audience shows so everything is at -23 LUFS.

RESULT FOR THE THIRD CASE: The DPP spec has been satisfied, the viewer has constant loudness, but the whole experience of the show is lost, and everything sounds like Radio 1. As a craft mixer, I don't consider this an acceptable solution.




Although the DPP recommend the "Yes" case, in practice the "No" case is what is happening. Mixers can set their speech as they wish to allow higher or lower levels, and it all seems fine if you consider a show in isolation and NOT as part of a greater network. We end up with what we had before: inconsistent levels between shows which defeats the entire object of R128. 


All of this results from the decision to use the "Integrated" value only, which as you see causes major problems. It's interesting that our European cousins have all picked different things to measure from the original EBU spec. The American networks seem to have realised the problems, and often have rules for dialogue levels as well as the overall show level.




This last question in particular is the one that I have never had properly answered. Every time I raise this, I get chapter and verse of R128 quoted at me, but the questions are never answered and eventually everyone goes quiet.

I am NOT trying to be awkward; I believe these are genuine concerns, and I want them to be answered in a satisfactory manner. The new spec is only acceptable if it works for ALL genres, and it clearly does not.

I believe we are throwing out a standard which had flaws but worked fairly well for 80 years, and we are replacing it with one that won't even solve the problems it was meant to.



Why did nobody ask the people at the sharp end?







Wednesday, 23 January 2013

Sorry about this but....

....this isn't REALLY a blog. At least not a real one that gets updated frequently with new and incisive articles. It's more of a quick stopping-off point where you can read a bit about who I am and what I do. I did intend to write more often, but as a senior sound supervisor for a major UK broadcaster there's little time for long speeches. At the moment, we've just moved into our new home at Media City in Salford, and there's a lot of new shiny toys just BEGGING to be played with....



But feel free to look around; you'll find some details about my life in TV, and some of the shows on which I've worked. If you want more up-to-the-minute news, the Twitter feed on each page is the place to find it.

Best wishes and thank you for visiting!

Jake



Saturday, 26 May 2012

The window on your world.....

Interfaces can be strange things. They sit between our hands and our products, and we use dozens of them every day without even thinking about it. Drive a car? The interface in the driving seat has, give or take a few details over the years, changed little from what it was at the start of the motor industry. Play an instrument? That interface could be hundreds of years old. Use a computer? The mouse is a relatively recent invention, but still about as common as it gets.

But that's all hardware. And the thing about hardware interfaces is that they're not easy to change. So when you learn to drive, you have to learn the standard control system. You might think that you'd be happier with the gearstick on the other side, but unless you go to a country which drives on the other side of the road, that's not going to happen. Those of us who are left-handed have lived with interfaces designed by right-handers all our lives, and there's not been much we can do about it. This isn't always a bad thing of course; imagine a world where every car had a unique control system. Part of the learning process is developing the "muscle memory" so you can operate the controls at a level where you don't have to think about it, and this can only easily work with a standard layout.

Once you get into software however, the whole thing changes. You potentially get a blank canvas on which you can design whatever you want, so you have a chance to make a unique interface which is uniquely "you". The big question is: is this a good thing?

Take sound desks; the classic analogue style of desk is common throughout the industry, and is well known to almost every engineer. Here's a typical analogue desk, the Calrec Q series in the Jeremy Kyle studio:

On this desk, the layout is absolutely standard, and fixed. You will always find the same control in the same place, and much like the car, you can't do a lot about it. But take the time, mix a few shows and you'll find that your hand will instinctively find the control that you need. If you were to then go to the other side of the world and walk into a TV studio that used an analogue console, you wouldn't need long to get up to speed.

Very importantly though, the layout of these desks was designed many years ago, often by the same engineers who would be using them, so the controls are laid out in a sensible, logical way.


If we now leap forward to Calrec's latest desk, you'll see a very important difference. This is one of the Calrec Apollo consoles at MediaCityUK in Salford. The first thing you'll see is that the hardware platform is merely a "display", much like a big computer screen with some controls added, and the "controls" are laid on in software. What you CAN'T see from the picture is the unique thing this desk offers: the ability to display any group of controls in any location. With this desk you can move controls around however you like, much like arranging windows on your desktop, so your interface is whatever you want it to be.

But here's the problem with a totally "soft" interface. If you have the freedom to lay out your controls any way you want, then you'll never develop the muscle memory I mentioned earlier. Without this, you can't just instinctively reach for a control and find it under your fingertips, because that control won't always be there. Unless you discipline yourself to use a fairly constant control layout, you'll always have to think before you reach that hand out. And when you walk into another studio where someone else has set the surface up for what THEY want, you'll be as lost as I was when I first saw the Apollo.

Take away the hardware and go to software only, and you can have a similar problem. Most of us are used to software having a "File" menu on the left with a "Save as" option somewhere in the list. This goes back a long way and may not really be the most efficient layout, but look what happens when someone like Microsoft releases Office 2007, which completely changes the location of every single menu option. Is this therefore a good thing or not?

Actually, this problem is probably more common in software than you might think. Take two audio workstations: Pro Tools and Reaper. Pro Tools, give or take a toolbar layout, will be the same wherever you encounter it, and this is what the manufacturer wants. Walk into any recording studio which uses it, and you'll find it looking and behaving exactly the same as any other studio, from the window layout down to the keyboard shortcuts. (Which are, of course, non-customisable. Whether you like them or not....)

Fire up any two installations of Reaper however, and you'll be lucky to find them the same. The whole ethos of Reaper is that you can customise it to the Nth degree, and make it work and look EXACTLY how you want it to, even down to skins which will give you a wooden mixer. But get too used to yours, and you may never be able to work another....

Which is the correct approach? Of course, there isn't really a right or wrong answer; like anything else it's up to you the user to decide. Do you want a standard, or do you want freedom? Do you really want to accept someone else's way of doing things? That's one thing we DON'T do in my department.


And why should we? That wooden mixer is just far too cool.



Sunday, 11 December 2011

Not Just A Good Idea....

In the world of broadcast sound, we often find ourselves coming up against the same problems again and again. This got me thinking; maybe we should publish a more "scientific" statement of some of the things that affect us?

In the last few months I've been posting some of these on Twitter; although the list is by no means complete or exhaustive, it has just reached its tenth entry, so this seems like an appropriate time to have a recap. Each one of these has a story behind it....



Jake's First Ten Laws Of Sound


Jake's 1st Law: It's all about gain. Nothing else matters.

Jake's 2nd Law: The shortest distance between two points is a taut patch-cord.

Jake's 3rd Law: An estate car full of gear will always find its own level at the first roundabout.

Jake's 4th Law: A limiter with a 24dB gain reduction meter is a challenge.

Jake's 5th Law: Compression ratios of less than 4:1 are for wimps.

Jake's 6th Law: The limiter on an SQN is the finest ever built by the hand of Man.

Jake's 7th Law: Any show you set up should be only as complicated as it needs to be. And NO more!

Jake's 8th Law: It is the absolute right of every UK broadcast sound mixer to peak all the way to 6 if they so choose.

Jake's 9th Law: A studio audience is a wild, capricious beast, and its fader does not wish to stay still. Indulge it!

Jake's 10th Law: Nobody else cares about sound. Until it goes wrong.





Thursday, 15 September 2011

Never leave a man behind....

Not exactly Bravo Two Zero....
Following on from my last article, I ran across this photo the other day. Here you see the sound crew, including myself, from last December's live Coronation Street episode. This was taken on the afternoon before the live transmission, after a week of mostly night rehearsals, so what you see is a bunch of grim, battle-worn and visual-effects-stained veterans who had finally managed to have a wash. And we were all still smiling.

Now what does this have to do with getting that all-important start in television? Well, there are a couple of other desirable qualities for a broadcast sound engineer which I didn't mention, and these may actually be the most important ones: the ability to take whatever the job throws at you and still come out smiling at the other end, and the ability to look after yourself and to look out for your colleagues.

Shows like Corrie can only work because they have a dedicated and committed crew who can do this every single day, whatever conditions the job throws at them. The live episode was only made possible by a massive team effort by ALL the departments, each one of whom, from the director down to the humblest cable-basher, had a precise role which they carried out flawlessly in conditions of near-freezing fog and darkness, all the while knowing that their mates were right behind them. And THAT, I think, is what brings a team together. Even though most of them would never admit it.

Why else would we all be sitting on a pile of fake bricks grinning inanely?

Thursday, 28 July 2011

That's what I go to school for....

A couple of days ago we had some visitors to the Jeremy Kyle studio: a party of work experience teenagers, all of whom were still at school, and who were here under the ITV Inspire scheme. We ran a typical show opening and chat sequence, and they had the chance to try their hands at the various positions in a TV studio, from director down to cable basher. It was a busy morning, so I didn't get a chance to chat with any of them in any detail, but it was clear that they were all very keen, motivated and quick to learn, and quite a few expressed an interest in coming back to sit in on a real show.

Now, this is a little different from our normal visitor profile; our typical guest is late teens or early twenties and is either part way through a university degree or has just completed one. So just how did our young visitors compare to our typical older ones? After all, they didn't have the benefit of several years of Uni education behind them......

They were good. VERY good.

In fact it was quite scary just how quickly they picked up all the different studio roles, and how motivated they all were. Although they may not have had the technical knowledge of a college student, they were all so committed to giving their best that after a couple of rehearsals they ran a show with minimal intervention from us and it worked! In comparison, some (although I must stress not all!) of our older visitors, who should really have impressed us, have had all the drive of a wet lettuce and VERY patchy technical knowledge.

All of which leads me to the real reason for writing this article. When our young visitors come round in a couple of years to looking at THEIR university options, they'll have a lot to consider. Particularly the fact that when they finish their chosen courses and graduate they'll have two things: a piece of paper with "Degree" written on it and a bill in excess of £30,000.

That's a very expensive piece of paper. But of course it will help them get a job in the industry won't it?

Ah. Sorry. It won't. First off, in spite of what many lecturers will tell you, there are hardly ANY actual jobs on offer in TV. There ARE freelance opportunities, but there is a LOT of competition for these.

But you still have to get a degree to get the knowledge to work in TV right?

Oops, sorry again. You don't.

When I chat to our older visitors who are in the middle of these courses, it becomes clear that although they ARE learning useful things, they're doing it in a very long drawn out and expensive way. The knowledge they seem to gain is knowledge which we could teach them in a much shorter period, or they could easily find out for themselves by reading back issues of Sound On Sound. Yes some of them are very keen, motivated and willing to learn, but I suspect they'd be the same if they'd walked in when they were 17 like our visitors this week. Suddenly that piece of paper is looking less useful.

Now please don't think I'm tearing apart the whole higher education system. There are a lot of "serious" degrees which WOULD impress us, such as music, electronics or electrical engineering, but the good sound-based ones seem to be few and far between. Surrey's Tonmeister course (for which you need to be at least Grade 7 on an instrument) is highly regarded, as are some from LIPA and Salford University. If you're not looking at a traditional university, then somewhere like the National Film School or Ravensbourne is worth a look. For what it's worth, I'd take a look first at the shorter industry-led courses at somewhere like SSR in Manchester or London. You'll learn just as much as with a degree and come out with a considerably smaller debt....

It's worth bearing in mind that the standard BBC training, the "A" course as it was known, was a mere three months long, yet managed to give you everything you needed to start working in TV the moment you completed it. You were still a trainee for your first three years, but at the end of that period you'd completed all your training AND had nearly three years' real-world experience. The entrance requirements to join the BBC as a sound assistant in the first place were O-levels in maths, English and ideally physics. As well as a passion for sound, drive, curiosity and motivation, which were expected even in an 18-year-old applicant. If you were the sort of person who took their toys apart as a child to see how they worked, you were perfect.

Hmm. That word "motivation" yet again. It seems to have come up a few times since I started this. And it's worth repeating: our young guests were motivated and keen in a big way, even though they didn't have a huge store of technical knowledge to help them. THAT is what makes a good TV trainee. We'll do the rest......