First, I wanted to recap a few decisions we made in the pre-production process that came up in reader comments to last month’s story. (See the link at the end of the story to read Pt. 1)
One reader pointed out that we used a stationary camera position for the 5:00+ song and suggested having multiple cameras would have made the video more exciting. While this is true, a multi-camera shoot would have required more time and editing than our project allowed for. The purpose of the video was to see what type of quality we could get sticking to a simple, one-camera, one-take approach.
Another reader questioned why we bothered to record the audio to a separate device instead of taking the audio mixing board output and plugging it directly into the cameras. While this is an option, the audio recording circuitry of most consumer-grade video cameras does not offer the kind of headroom that makes for good audio reproduction. Just listen to the thousands of self-made performance videos on YouTube where this approach was tried and you’ll hear poorly balanced or distorted audio tracks. Alternatively, using the camera’s on-board microphone rarely results in a decent audio track for a full band. If you are a solo performer with acoustic guitar and vocals, you might get a decent result from the on-board mic, but higher quality external mics and a separate audio recorder will always yield a better-sounding audio track for a band. That’s why we chose to do a separate live audio mix to the Zoom recorder and match the audio and video recordings up in post-production. The live mix came out clean and very representative of what the band actually sounds like live, which was one of the band’s goals: to capture what they sounded and looked like in a live setting.
While the band was packing up their equipment after the shoot, I pulled the SD Memory card out of the Zoom recorder and, using a universal card reader, downloaded the audio mixes for all four complete takes of “Before This Began” to my computer.
At home that night, I plugged in both of our cameras and, one at a time, transferred the video recordings to my Mac. The trick to making this a snap is to first open the iMovie application (which comes free on all new Macs). When I plugged in the camera’s USB cable, the software automatically recognized the camera and asked me if I wanted to import the video files into iMovie.
After clicking “Yes,” it took about five minutes to download the standard definition video we shot with the Canon FS200. Then I repeated the process for the HF200 camera, which took a little more than 30 minutes to download the larger, high definition video files.
A few days later, Dan and I got together at my house and started checking the audio mixes. All four complete takes came out fine, but as we expected, take four had the best overall audio mix. The audio was saved as 16-bit, 44.1 kHz .WAV files, with the complete song taking 59.7 MB of disk space. Next, we reviewed take four of the video recordings from both cameras. We noticed that the tripod we had used under the FS200 didn’t work properly, so each time the cameraman zoomed or panned, there was a slight but perceptible jitter, which rendered the footage from the FS200 unusable. All was not lost, however, since our second camera, the HF200, had a much sturdier, fluid head tripod that performed as expected.
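Those file sizes line up with basic WAV arithmetic: an uncompressed file’s size is just sample rate × bytes per sample × channels × duration. As a quick sanity check (assuming a stereo mix, which a live board mix to the Zoom would typically be), here’s the math in a few lines of Python:

```python
# Rough math for uncompressed WAV file size.
# Assumes a stereo mix -- the article doesn't state the channel count.
SAMPLE_RATE = 44_100      # samples per second
BIT_DEPTH = 16            # bits per sample
CHANNELS = 2              # stereo (assumption)

bytes_per_second = SAMPLE_RATE * (BIT_DEPTH // 8) * CHANNELS  # 176,400 bytes/sec

file_bytes = 59.7e6       # 59.7 MB, as reported
duration_seconds = file_bytes / bytes_per_second
minutes, seconds = divmod(round(duration_seconds), 60)
print(f"{minutes}:{seconds:02d}")  # 5:38
```

That works out to a song of roughly five and a half minutes, which squares with the “5:00+ song” mentioned earlier.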
In hindsight, shooting with two cameras was a good safety measure: had we only used the FS200 video, we would have had to reschedule another shoot. A simple but essential piece of equipment, such as a tripod, should never be taken for granted when making your own DIY videos.
The second thing we realized was that we had used the high definition mode on the HF200, and the resulting video had much greater contrast and definition than the footage from the FS200. So the second lesson is: if you can borrow a camera that is high definition-capable, take advantage of it and capture the original video in the highest possible resolution.
To complete the post-production process we faced four distinct linear tasks. These were:
1. Syncing up the audio recording with the video
2. Trimming the beginning and end of the selected video recording
3. Adding opening titles and the band’s MySpace address as closing credits
4. Posting the finished video to YouTube
At the start of our post-production session, we had a complete video recording of the song and a separate complete audio recording of the song. The first thing we learned was that iMovie requires you to start the editing process by opening a “New Project,” which becomes the finished version of the video. In my opinion, this is one of iMovie’s best design features: as you edit your video in the new “Project,” your original source video is never changed, so you always have it as a backup.
Step One: Syncing Audio with Video
We had relied on our own DIY substitute for the electronic pulse and SMPTE time code that would have been generated by a professional clapper system. By focusing each camera on me while I held up a sign with the take number and counting down before clapping my hands, we hoped that we could adequately synchronize our separate audio recording to the video. Now it was time to find out if this would really work.
The raw video showing in our project window had the compressed, echo-y sound that was recorded by the HF200’s on-board mic. Dan went to the Edit menu and selected “Detach Audio,” which separates the camera audio from the video. The audio then appeared as a separate colored band underneath the video strip. Since we weren’t going to use any of the on-board audio, we deleted the track.
Next, we minimized the iMovie app, opened the folder with the separately recorded audio mixes, and dragged the audio track onto the computer desktop. Then we dragged the .WAV file into iMovie and dropped it right over the camera audio track in the project window.
While the audio import was a snap, getting it in sync with the performance proved to be a bit more challenging.
The audio mix landed roughly three seconds ahead of when the video performance actually began. Dan was able to click on the beginning of the audio track and drag it to the right (later in the project window) and after a few tries, we had the visual of me clapping fairly close to the sound of the handclap on the audio track from the Zoom.
It only took about ten seconds of viewing before we realized that we were still out of sync, but by a much smaller amount.
The other thing we noticed was that the newly imported audio volume was a bit low. Double-clicking on the project video brings up the Inspector pop-up menu, which allows you to fine tune video or audio for your project. Dan selected “Normalize Clip Volume,” which boosted the level nicely for the overall audio mix.
Watching the drummer and matching up the sticks hitting his hi-hat and cymbals was one good way to check our sync. The song’s chorus also featured some aggressive chords on guitar, which was another visual “hit” that had a correspondingly clear audio element we could check. The third cue we looked at closely was our singer’s mouth, and the sound of her breathing on the audio mix. What we discovered as we worked to accurately sync up the audio and video was that the original hand clap sync only got us so close.
In other words, the video and audio of the handclap would appear perfectly in sync at the top, but when we closely watched the band’s performance, and the “hits” we were tracking, we found that we were still off by a few frames. (Each second of video is broken down into thirty smaller units, called frames.) It actually took us about 30 minutes of painstaking experimentation before we finally had the sync close enough to move on to the next step.
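To put those few frames in perspective, a little arithmetic (using the 30 frames-per-second rate mentioned above) shows why even a small frame offset between sound and picture is noticeable:

```python
# How much time does a few frames of sync error represent at 30 fps?
FPS = 30
frame_ms = 1000 / FPS  # each frame lasts about 33.3 milliseconds

for frames_off in (1, 2, 3):
    offset_ms = frames_off * frame_ms
    print(f"{frames_off} frame(s) off = {offset_ms:.0f} ms")
# 3 frames is about 100 ms -- an audible lag between a drum hit and its sound
```

Research on audiovisual perception generally puts the threshold where viewers notice lip-sync error at well under 100 milliseconds, so a three-frame offset is enough to make a performance look wrong.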
Although it was tedious and time-consuming, we had proven that we could marry a separate audio recording to the video successfully, resulting in a much better audio track than the on camera mic had captured. Dan and I agreed the extra hassle in recording audio separately and syncing up had been worth it, as our mix was clean and had a full frequency sound that no on-board camera mic would have captured.
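For the curious, the clap alignment we did by eye can in principle be automated: slide one track against the other and keep the offset where the clap spikes line up best (a simple cross-correlation). Here’s a minimal pure-Python sketch on synthetic data; `best_lag` is an illustrative helper of my own, not part of iMovie or any real audio tool, and real audio would be decoded into sample arrays first:

```python
def best_lag(reference, other, max_lag):
    """Samples by which `other` lags behind `reference` (positive = later)."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # Score how well the two signals overlap at this shift.
        score = sum(
            reference[i] * other[i + lag]
            for i in range(len(reference))
            if 0 <= i + lag < len(other)
        )
        if score > best_score:
            best, best_score = lag, score
    return best

# Synthetic example: a clap (spike) at sample 5 in the camera audio
# and at sample 8 in the Zoom recording -> the Zoom track is 3 samples late.
camera = [0.0] * 20
camera[5] = 1.0
zoom = [0.0] * 20
zoom[8] = 1.0
print(best_lag(camera, zoom, max_lag=10))  # 3
```

On real recordings you would run this on the first few seconds of each track (at 44.1 kHz, a one-sample lag is about 0.02 milliseconds, far finer than a video frame), then shift the imported audio by the result instead of nudging it by hand.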
Step Two: Trimming the Video Recording
Dan set the spot where we wanted the live performance to begin, about five seconds before the music started. The most intuitive way to do this is to use the software’s “Precision Editor,” which appears as a small gear wheel when you roll your mouse pointer over your project’s video clip. Then you simply drag the little handles at the front and back of your project video to determine where it will begin and end. (If you want more precision, holding down Option and using the left or right arrow key lets you adjust the trim point one frame at a time.)
The day of the video shoot, I had instructed the band to freeze for about fifteen seconds between takes, muting their instruments completely and avoiding talking or making any noise. This proved helpful as Dan set the start point five seconds before the song began (to allow for a fade in to set the stage) and the end point shortly after it finished (for the fade out). We played the entire video back in high definition to confirm we liked the start and end points before moving on to adding the titles.
Part Three will detail the process of adding titles and publishing the video to YouTube.
Special thanks to Dan Faughnder, Erik Urbina, Ralph Roberts, Middagh Goodwin and the band Sugar Water Purple for collaborating on this project. Thanks also to James Gonzalez, Jeff Crawford, Jace Hargis and Dave Chase for the loan of various pieces of video and audio gear.
by Keith Hatschek for discmakers.com