thinglets: How Do I Do This Shit I Do?

A question by Cheryl following the Why Do I Do This Shit I Do? podcast inspired the following post on methodology and gear.

"I am astounded at the number of podcasts you put together in a week -- how long does each one take? Do you script any of them? What about production, is it just record and go? What software/hardware are you using?"

My long-winded answer... because I don't do anything short-winded.

Nothing is scripted anymore. If you were to hear some of the lovehate podcasts from a year and a half ago, you'd find they were all readings of the blog posts - which used to be LONG. I used to keep the "scripted" and impromptu podcasts numbered separately, but as my time for writing became scarce I went completely extempore, merged the two streams at Podcast #42 of each, and called the next podcast Episode 85. Most of the time, other than a basic premise to kick off the festivities, I have no idea where the LHT podcasts will end up.

The current lovehatethings podcasts are generally recorded in real time (10m) and then I drop some mildly meaningful music in the background, save it to mp3 and post it - total time about 25m.
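For the curious, dropping music under a voice track is conceptually just scaling the music samples down and summing them with the voice. A toy sketch using only Python's standard-library wave and struct modules (16-bit mono WAVs and all filenames are assumptions on my part - this is obviously not what the actual editing software does internally):

```python
import struct
import wave

def duck_mix(voice_path, music_path, out_path, music_gain=0.2):
    """Mix background music quietly under a voice track.

    Assumes both inputs are 16-bit mono WAVs at the same sample rate.
    """
    with wave.open(voice_path, "rb") as v, wave.open(music_path, "rb") as m:
        params = v.getparams()
        voice = struct.unpack("<%dh" % v.getnframes(), v.readframes(v.getnframes()))
        music = struct.unpack("<%dh" % m.getnframes(), m.readframes(m.getnframes()))

    mixed = []
    for i, sample in enumerate(voice):
        bg = music[i] if i < len(music) else 0  # music may be shorter than voice
        # Sum voice with attenuated music, clamped to the 16-bit range.
        mixed.append(max(-32768, min(32767, int(sample + music_gain * bg))))

    with wave.open(out_path, "wb") as out:
        out.setparams(params)
        out.writeframes(struct.pack("<%dh" % len(mixed), *mixed))
```

The mp3 encode at the end would be a separate step (an encoder like LAME, or whatever the editor exports with); this only shows the mix itself.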

As for my other podcasts: DyscultureD and TV, Eh are just riffing off whatever links we've collected, while Best Episode Ever is riffing off the show's Wikipedia page and personal recollections.

LHT is recorded directly into Cool Edit Pro 2.1, an old program that is absolutely brilliant and leaves a ridiculously small footprint on the processor. The program is also used to edit all of the other podcasts. DYS and TV, Eh are recorded over Skype using a program called Call Graph.

The time investment for Best Episode Ever is very similar to lovehatethings unless I'm recording an episode with someone else over Skype.

TV, Eh editing is generally not too demanding unless I have to post-process an interview. I usually just add the opening and closing themes and mix down to mp3.

DyscultureD takes the longest just because we do segments and break between them. The breaks necessitate some editing and insertion of stingers. Since the new theme song, I've also taken a couple of minutes to record the intro as well. A no-nonsense quick edit of the podcast is usually 30m, but the process takes quite a bit longer as my upload speeds often add 20 minutes to everything.
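Stripped of the creative part, that segment-and-stinger assembly is just concatenation: intro, segment, stinger, segment, and so on, stitched end to end. A minimal sketch with Python's standard-library wave module (the filenames are hypothetical, and this is a stand-in for what's really done by hand in the editor):

```python
import wave

def concat_wavs(parts, out_path):
    """Concatenate WAV files that all share the same sample rate,
    channel count, and sample width."""
    with wave.open(parts[0], "rb") as first:
        params = first.getparams()  # copy format from the first segment
    with wave.open(out_path, "wb") as out:
        out.setparams(params)
        for path in parts:
            with wave.open(path, "rb") as w:
                out.writeframes(w.readframes(w.getnframes()))

# Hypothetical episode layout:
# concat_wavs(["theme.wav", "segment1.wav", "stinger.wav", "segment2.wav"],
#             "episode.wav")
```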

As for hardware, I've got a REALLY fast Dell PC box with 9GB of RAM that helps speed the process. My newest toy is the Australian Rode Procaster microphone with shockmount and boom that I've had for about half a year now. (see top of post)

I also use a Behringer Eurorack UB802 mixer as I really prefer the sound of a mic going directly into the analog port instead of USB. (Maybe it's just the old musician in me, though I do still have a USB Blue Snowball and a USB headset mic for trips with my laptop.) The Eurorack also facilitates anything else I want to plug in - a keyboard or another instrument - if I'm going to record music.

Posterous has really made everything else easy. I'd rather spend time on content creation instead of webpage coding, so I'm relieved that the advent of Posterous and my relaunch into blogging and subsequently podcasting had a serendipitous synchronicity. 

Probably more than anyone wanted to know. While it may sound complex, I have also recorded about ten podcasts from Las Vegas casinos with my iPhone and nothing else. Engaging content and style will trump gear any day of the week.

lovehate: The Internal Organ Printer... Yes You Read That Right

picture via The Economist

Since we've recently discovered that, using crazy polymer plastics, printer-like devices can actually "print" objects (including the parts to reproduce themselves), I suppose we shouldn't be surprised that they'd make printers that could produce other things. I was kind of hoping for world peace or a cure for cancer. I suppose I was closer to the latter than the former. Now there's a printer that prints body parts and internal organs.

And what brilliant tech-sounding, intimidating name could we use for such a device? Surely the most appropriate moniker would contain a bunch of obscure letters, hyphens, and numbers with a word that justified the device's $200,000 price tag. I would think the best approach would be the "ExoHyperTron 4XGi".

Instead, may I present the amazing technical marvel that is the Organovo!?!

In a tepid tribute to the unimaginative mind that created the elusive "Unobtainium" in Avatar, Organovo sounds more like a masturbatory device than a medical marvel. From a recent article in The Economist:

"Dr Atala... is experimenting with inkjet technology. The Organovo machine uses stem cells extracted from adult bone marrow and fat as the precursors. The cells are formed into droplets 100-500 microns in diameter and containing 10,000-30,000 cells each. The droplets retain their shape well and pass easily through the inkjet printing process. A second printing head is used to deposit scaffolding—a sugar-based hydrogel. Some researchers think machines like this may one day be capable of printing tissues and organs directly into the body."

How frustrating is it when your print cartridge runs out in the middle of an essay that's due that afternoon? Can you imagine a surgeon sending an attending down to the kiosk in the local mall to wait for your local Warcraft Guild leader to drill a hole and use a syringe to refill the unit with stem cells?

We've heard for years how printer ink is the most expensive substance in the world (how a few milliliters cost between $20 and $70). I would imagine the value might grow a bit if you threw stem cells into the mix. Would your new cartridge have to be CMYK-SC? One way or another, I have a feeling this will be a package option with a new Dell tower in five years... hell, I suppose one could buy the printer, print out a Dell tech and the parts, and have him build the PC for me.

DyscultureD Podcast Thirty Eight: The Double Down

This week's episode!

My other web outlet is at DyscultureD where we do a weekly podcast on all things right and wrong with pop culture. Follow the link above to this week's episode... show notes below.

Full Dysclosure

  • The scratch ticket affair that is the MJ memorial
  • Bell buys Virgin Mobile and The Source
  • BNN buckles on IP and copyright video clips
  • Pirate Bay sells short
  • Alternate Bit Torrent options
  • Browser Wars Part @?$#%
  • Canadian made TV hitting US Big 3
  • Cheap Trick’s not-so-cheap trick in music promotion

Websites of the Week

  • Mike - bookseer.com - a simple recommendation engine for your NEXT read
  • Anth - theusermanualsite.com - ever lost a user manual for a gadget or appliance? Find it here.

Music

Laura Smith - I Spy a Monster - www.laurasmithmusic.com

thinglets: Bohemian Rhapsody Old School Computer Remix

I think Freddie Mercury would be proud of the time and dedication it must have taken to produce this - although he may have preferred some spandex be involved. Just goes to show how one can find music anywhere. It's a little bit hypnotic as well. I can't believe I just watched a tech junkyard create music for six minutes... I need help!

Podcast Thirty Four: Beware of Geeks Bearing Gifts

Concerning employers trying to become our new social networks, tech blog entries full of sound and fury signifying nothing, Comcast paying us to watch porn, and how I'm preparing to blow out the last candle on the integrity of popular music.

lovehate: Auto-accompaniment and the Failures of Simulation

I've been playing piano since I was five and, while there have been short periods when performing music has fallen out of my interests, I have almost always had an appreciation for a completely live performance. Such a performance can include anything from a single instrument and voice all the way up to a full orchestra.

I remember playing, as a teenager, the 80s-drenched, synth-oriented dance pop that pervaded the charts. I even bought into the concept of a synthesizer or two, but I hated the dreaded sequencers and samplers that would allow even the most inept players to spout forth with "cool"-sounding patterns and loops. I could tolerate the idea of a synthesizer making sounds that were unique to the instrument itself and not trying to generate something else. With the persistent adoption of drum machines and string patches and horn sections and poorly-modelled electric pianos, I retreated further into a state that I considered a bit of musical elitism: a piano sound should come from a piano, a drum sound should come from a drum, and a bass guitar sound should come from a bass guitar.

Don't get me wrong, I understand the attraction of simulation. I have recorded songs where I've used a keyboard to create multiple music tracks, but always, in my head at least, the exercise was just that - an exercise. Call me old-fashioned when it comes to music, but I believe there should be something organic to musical sound. And this from a guy who grew up idolizing Keith Emerson and his endlessly-tweakable envelope filters.

As I grew older, I developed a certain tolerance for auto-accompaniment, but always with a sense of kitsch. The idea of the cheesy home organ with beat generator and portamento was to be smiled at and laughed with instead of laughed at. I am willing to listen to someone satirize a traditionally serious song by giving it the Wurlitzer treatment.

And it was with all this derision that I shook my head in disbelief when I learned of Microsoft's Songsmith software during CES last month. While this product's limitations have been shown to glorious and humorous effect by feeding the vocal lines of past hits into its engine and watching the generic "reggae" or "soft rock" accompaniment get triggered, could anyone have really expected anything different? You know what? I was expecting better.

While I believe the concept abhorrent and completely against all of my sensibilities about music, I fully expect that the technology is not out of reach to mesh anyone's random vocalizing with very solid-sounding accompaniment. I anticipate that no matter how badly someone sings, the software's engine should, on the fly, fix any out-of-tune notes and quantize the rhythmless until they sound inoffensive. I expect that music AI has advanced far enough that realistic-sounding instruments can be modelled in real time to sound at least as good as many of the mediocre ballads in the top ten of most pop music charts.
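The pitch-and-rhythm fixing is less magic than it sounds: at its core, snapping a sung note to key is just rounding in log-frequency space, and quantizing is rounding an onset onto a time grid. A toy sketch of both ideas (purely illustrative on my part - this has nothing to do with Songsmith's actual engine):

```python
import math

A4 = 440.0  # equal-temperament reference pitch in Hz

def snap_pitch(freq_hz):
    """Snap a detected frequency to the nearest equal-tempered semitone."""
    semitones = 12 * math.log2(freq_hz / A4)  # distance from A4 in semitones
    return A4 * 2 ** (round(semitones) / 12)

def quantize_beat(t_sec, grid=0.25):
    """Snap an event's onset time to the nearest point on a rhythmic grid."""
    return round(t_sec / grid) * grid
```

A real auto-tuner would also have to detect the pitch in the first place and glide between notes instead of jumping, but the "fix it by rounding" core is this simple.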

I expect we're on that threshold and, while it should scare the hell out of me, I've discovered I really don't care because if some out-of-tune arhythmic cellar dweller can one day sell a million copies of a song they produced in their basement, and maybe flip the RIAA and the Big Four the finger while doing so, I'll buy a cake and with wry, smiling dismay blow out the last candle on musical integrity.

funmaker