photo courtesy gizmodiva.com
A question by Cheryl following the Why Do I Do This Shit I Do? podcast inspired the following post on methodology and gear.
"I am astounded at the number of podcasts you put together in a week -- how long does each one take? Do you script any of them? What about production, is it just record and go? What software/hardware are you using?"
My long-winded answer... because I don't do anything short-winded.
Nothing is scripted anymore. If you were to hear some of the lovehate podcasts from a year and a half ago, you'd find they were all readings of the blog posts - which used to be LONG. I used to keep the "scripted" and impromptu podcasts numbered separately, but as my time for writing became scarce I went completely extempore and merged the two streams at Podcast #42 of each, calling the next podcast Episode 85. Most of the time, other than a basic premise to kick off the festivities, I have no idea where the LHT podcasts will end up.
The current lovehatethings podcasts are generally recorded in real time (10m) and then I drop some mildly meaningful music in the background, save it to mp3 and post it - total time about 25m.
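That mixdown step - dropping quieter music under the voice track before exporting - is something Cool Edit Pro does interactively, but the arithmetic underneath is simple enough to sketch. Here's a toy Python version; the 16 kHz mono format, the -12 dB music level, and the tone-generating helper are illustrative assumptions, not the settings from my actual sessions:

```python
import math
import struct

RATE = 16000  # samples per second (assumed for the sketch)

def tone(freq_hz, seconds, amp=0.5):
    """Generate a mono sine tone as a list of 16-bit samples (stand-in for real audio)."""
    n = int(RATE * seconds)
    return [int(amp * 32767 * math.sin(2 * math.pi * freq_hz * i / RATE))
            for i in range(n)]

def mix(voice, music, music_gain_db=-12.0):
    """Overlay music under voice at reduced gain; hard-clip the sum to 16-bit range."""
    gain = 10 ** (music_gain_db / 20.0)
    out = []
    for i in range(max(len(voice), len(music))):
        v = voice[i] if i < len(voice) else 0
        m = music[i] if i < len(music) else 0
        s = int(v + gain * m)
        out.append(max(-32768, min(32767, s)))  # clip instead of wrapping
    return out

if __name__ == "__main__":
    voice = tone(220, 0.5)
    music = tone(440, 0.5)
    mixed = mix(voice, music)
    # Packed frames like these are what you'd hand to the wave module's writeframes()
    frames = struct.pack("<%dh" % len(mixed), *mixed)
    print(len(mixed), max(abs(s) for s in mixed))
```

A real mixdown would read WAV frames and encode to mp3 with an external tool, but the "duck the music under the voice" part really is just a scaled sum like this.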
LHT is recorded directly into Cool Edit Pro 2.1 which is an old program that is absolutely brilliant and uses a ridiculously small footprint of the processor. The program is also used to edit all of the other podcasts. DYS and TV, Eh are recorded over Skype using a program called Call Graph.
The time investment for Best Episode Ever is very similar to lovehatethings unless I'm recording an episode with someone else over Skype.
TV, Eh editing is generally not too demanding unless I have to post-process an interview. I generally just add opening theme and closing theme and mixdown to mp3.
DyscultureD takes the longest just because we do segments and break between them. The breaks necessitate some editing and insertion of stingers. Since the new theme song, I've also taken a couple of minutes to record the intro as well. A no-nonsense quick edit of the podcast is usually 30m, but the process takes quite a bit longer as my upload speeds often add 20 minutes to everything.
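Structurally, that break-and-stinger edit is just concatenation: segments joined end to end with a stinger dropped between each pair. A trivial sketch, operating on sample lists (all names hypothetical):

```python
def assemble(segments, stinger):
    """Join podcast segments end to end, inserting the stinger between each pair."""
    out = []
    for i, seg in enumerate(segments):
        if i:  # no stinger before the first segment
            out.extend(stinger)
        out.extend(seg)
    return out

if __name__ == "__main__":
    show = assemble([[1, 1], [2, 2], [3, 3]], [9])
    print(show)
```

The time sink isn't the splice itself - it's trimming each segment's edges and, lately, recording the intro over the new theme.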
As for hardware, I've got a REALLY fast Dell PC box with 9GB of RAM that helps speed the process. My newest toy is the Australian Rode Procaster microphone with shockmount and boom that I've had for about half a year now. (see top of post)
I also use a Behringer Eurorack UB802 mixer as I really prefer the sound of a mic going directly into the analog port instead of USB. (Maybe it's just the old musician in me, though I do still have a USB Blue Snowball and a USB headset mic for trips with my laptop.) The Eurorack also facilitates anything else I want to plug in if I'm going to record music and add a keyboard or other instrument.
Posterous has really made everything else easy. I'd rather spend time on content creation instead of webpage coding, so I'm relieved that the advent of Posterous and my relaunch into blogging and subsequently podcasting had a serendipitous synchronicity.
Probably more than anyone wanted to know. While it may sound complex, I have also recorded about ten podcasts from Las Vegas casinos with my iPhone and nothing else. Engaging content and style will trump gear any day of the week.
Since we've recently discovered that, using crazy polymer plastics, printer-like devices can actually "print" objects (including the parts to reproduce themselves), I suppose we shouldn't be surprised that they'd make printers that could produce other things. I was kind of hoping for world peace or a cure for cancer. I suppose I was closer to the latter than the former. Now there's a printer that prints body parts and internal organs.
My other web outlet is at DyscultureD where we do a weekly podcast on all things right and wrong with pop culture. Follow the link above to this week's episode... show notes below.
Websites of the Week
Laura Smith - I Spy a Monster - www.laurasmithmusic.com
I think Freddie Mercury would be proud of the time and dedication it must have taken to produce this - although he may have preferred some spandex be involved. Just goes to show how one can find music anywhere. It's a little bit hypnotic as well. I can't believe I just watched a tech junkyard create music for six minutes... I need help!
An impromptu episode that asks why people (read: gearheads) are so interested in seeing new products "unboxed". I can put my $49 Printer/Scanner/Copier back in the box for you so you can see it unboxed in all its glory.
Concerning employers trying to become our new social networks, tech blog entries full of sound and fury, signifying nothing, Comcast paying us to watch porn, and how I'm preparing to blow out the last candle on the integrity of popular music.
I've been playing piano since I was five and, while there have been short periods when performing music has fallen out of my interests, I have almost always had an appreciation for a completely live performance. Such a performance can include anything from a single instrument and voice all the way up to a full orchestra.
I remember playing as a teenager amid the synth-drenched, dance-oriented pop that pervaded the charts in the 80s. I remember even buying into the concept of a synthesizer or two but hated the concept of the dreaded sequencers and samplers that would allow even the most inept players to spout forth with "cool" sounding patterns and loops. I could tolerate the idea of a synthesizer making sounds that were unique to the instrument itself and not trying to generate something else. With the persistent adoption of drum machines and string patches and horn sections and poorly-modelled electric pianos, I retreated further into a state that I considered a bit of musical elitism: a piano sound should come from a piano, a drum sound should come from a drum, and a bass guitar sound should come from a bass guitar.
Don't get me wrong, I understand the attraction of simulation. I have recorded songs where I've used a keyboard to create multiple music tracks, but always, in my head at least, the exercise was just that - an exercise. Call me old-fashioned when it comes to music, but I believe there should be something organic to musical sound. And this from a guy who grew up idolizing Keith Emerson and his endlessly-tweakable envelope filters.
As I grew older, I developed a certain tolerance for auto-accompaniment, but always with a sense of kitsch. The idea of the cheesy home organ with beat generator and portamento was to be smiled at and laughed with instead of laughed at. I am willing to listen to someone satirize a traditionally serious song by giving it the Wurlitzer treatment.
And it was with all this derision that I shook my head in disbelief when I learned of Microsoft's Songsmith software during CES last month. While this product's limitations have been shown to glorious and humorous effect by copying the vocal lines of past hits into its engine and watching the generic "reggae" or "soft rock" accompaniment get triggered, could anyone have really expected anything different... you know what? I was expecting better.
While I believe the concept abhorrent and completely against all of my sensibilities about music, I fully expect that the technology is not out of reach to mesh anyone's random vocalizing with a very solid sounding accompaniment. I anticipate that no matter how badly someone sings, the software's engine should, on the fly, fix any out-of-tune notes and quantize the rhythmless until they sound inoffensive. I expect that music AI has advanced far enough that realistic-sounding instruments can be modelled in real time to sound at least as good as many of the mediocre ballads that are in the top ten of most pop music charts.
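The core of that on-the-fly pitch fixing is, mathematically, not much more than snapping a detected frequency to the nearest equal-temperament semitone and resynthesizing at that pitch. A toy sketch - the 440 Hz reference and MIDI note numbering are standard conventions; everything else here is illustrative, not how any real product works:

```python
import math

A4 = 440.0  # standard reference pitch in Hz (MIDI note 69)

def nearest_semitone(freq_hz):
    """Snap a frequency to the nearest equal-temperament note.
    Returns (midi_note, corrected_freq_hz)."""
    midi = round(69 + 12 * math.log2(freq_hz / A4))
    return midi, A4 * 2 ** ((midi - 69) / 12)

if __name__ == "__main__":
    # A sharp A (452 Hz) snaps back to 440; a flat middle C snaps to C4
    for f in (452.0, 430.0, 262.0):
        n, fixed = nearest_semitone(f)
        print(f"{f:6.1f} Hz -> MIDI {n} ({fixed:.2f} Hz)")
```

The hard part in practice is detecting the pitch and shifting the audio without artifacts, not deciding where it should land.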
I expect we're on that threshold and, while it should scare the hell out of me, I've discovered I really don't care because if some out-of-tune arhythmic cellar dweller can one day sell a million copies of a song they produced in their basement, and maybe flip the RIAA and the Big Four the finger while doing so, I'll buy a cake and with wry, smiling dismay blow out the last candle on musical integrity.