lovehate: Wolfram Alpha Renews Authority Issues

In the past few years of webolution we've dealt with the advances of technologies and platforms that are greeted with great fanfare. What tends to get lost, or at least glossed over as time passes, are some of the ethical questions that arise from the technologies.

Remember when everyone thought Google was the greatest thing ever? Wait... I guess most people still do. There once was a time where the links that popped up for your searches were suspect. Why those links and not others? Was there some grand design that we were unaware of? Was Google harnessing the power of search results, feeding us what it wanted to feed us instead of an "above board", transparent list of links where my little homepage could ever have a chance of hitting the top? I remember thinking about this once... for a few seconds anyway, and then I got to my searching. I had ceded my link aggregation to Google.

Wikipedia has crowdsourced knowledge to the point where noted journalists are buying into the entries as though they're gospel. We search. We find entries that we can pretty much guarantee are suspect in one regard or another, yet we cite, source, report, and pass off the amalgamated ponderings of others as the 21st century Funk & Wagnall's. Don't get me wrong; I can appreciate the fact that we've moved from an authority system based on the printing press to one based on monitor text. I'm not naive enough to think that print encyclopedias were without bias. I do know, however, that the filtering system to go from research, to edit, to publish at a publishing house is at least tangibly more complex than clicking "edit". We have ceded our knowledge to Wikipedia.

And then there were the past year's fears of Digg manipulating stories to jack up the ratings of "superusers" or otherwise gaming its front page results. The community cried "Foul!" and most of us went back there anyway.

Let's digress for a minute though...

Google, Wikipedia, Digg - none of them are bound to ANY public recourse or obligation. These are private companies that may be community-driven in some respects, but are beholden to no one but themselves, or their shareholders. Even still, we have ceded authority to the aggregators... and I'm sadly willing to accept it, because their functionality makes my life easier and I'm far too lazy to pursue the alternative.

So now we are presented with Wolfram Alpha, which purports to be a "computational knowledge engine", is way cool, and has potential written all over it. But its existence (and future) raises questions concerning our divested authority. While Google and Digg ask us to accept rankings and Wikipedia asks us to accept knowledge, Wolfram Alpha is asking us to accept solutions. While this may seem to be a fine line (and one that I'm sure I'll be accepting sometime soon), the line does lead down the path to bigger ethical questions than link aggregation.

Is it okay to cede problem solving to the web? Don't get me wrong here. I realize that WA is not apt to solve the world's problems even with the best-placed query. My fear is that the ouroboros of crowdsourcing will increase exponentially. When does a Wikipedia entry that's received a million hits, because of its listing in Google, become so accepted that it is "fact"? When does "fact" get integrated into research which, itself, gets re-cited back into Wikipedia and other sources? When does Wolfram Alpha generate solutions based on a "fact" that, in itself, gets republished to create new "research"?

And I guess we can go back to the paper v. digital question, where I'm sure someone will correctly assert that this feared pattern has all happened before in paper, ink and press. I'll concede to that. My issue is the filtering. Now it can happen in an hour or a day. Research used to be time intensive and subject to the self-questioning that the research and publishing process would allow. The speed of the web CAN deny such reflection. Where it has always been incumbent on consumers of media to question content providers, the obligation becomes even greater when server-side computation verges on the nascent stages of AI. Alright, I know we're not talking Skynet here, but there's a big difference between "here's where you can go to possibly find the answer" and "here's the answer".

Who's afraid of the Big Bad Wolfram?


lovehate: "Getting" Twitter

The greatest thing about the advancements in web technology is that, at least for the time being, they continue. Don't get me wrong, I understand the PC is a tool that will eventually be replaced and the net, as we know it, will become radically different. Just as we went from gramophone to turntable to reel-to-reel to 8-track to cassette to CD to download, the PC does have a shelf life, as does this tool we call the web. But, for the time being, the learning curve is immense and expanding.

Perhaps the greatest advantages that I've found lately, however, are not necessarily discovering new websites or technologies, but new ways to use existing ones. Through integration, aggregation, and applications, web programmers are opening up vast new frontiers in web usage and viability.

As an example, I think I'm starting to "get" Twitter. And it's not that I didn't understand the technology or the concept or even the appeal that the platform had for some people. I'd figured there was a way to use the tool properly that I just hadn't figured out (and didn't even necessarily care to take the time to understand), in the same way that many non-musicians listen to a jazz improv and find it confusing or self-indulgent noodling. There may even be some who love music and understand the appeal without necessarily liking it themselves. That's kind of how I felt about Twitter.

I was aware of Twitter a long time before I signed up and even longer before I really started exploring it. Going to my page just seemed stale to me. It seemed, for the longest time, like a weak pretender to a sole aspect of Facebook that was cool enough but not compelling. And I followed the requisite Twitterati to see them lifecasting (which I abhor) and tweeting pearls of wisdom to the adoring masses who sat around all day praying for the @reply. But, as with anything on the web, one-way communication isn't going to cut it, and absolutely no one (I mean zilch) was following me.

I also knew that the easiest way to get followers was to randomly follow 10,000 people in the hopes that 1,000 follow you back. I've never been like that on MySpace or Facebook, so I certainly wasn't going to do that on Twitter. I much prefer to pursue an organic growth of followers and, at the time of writing this, I am following 117 people and have 114 followers. Of those followers I assume a certain percentage are spammers and dead profiles. I'm thinking that somewhere around the 100 mark is the stage-one critical mass it took for me to find a balance between mere updates from the Twitterati and more meaningful content from people I have formed some sort of relationship with, even if it's just online. I suppose I could have reached higher numbers quicker, but I don't know that I would have cared about what anyone was saying at that point and, as such, may have lost interest altogether.

In addition to reaching this first step of discovering the benefits and relative potential of Twitter in capturing my interest beyond an obligatory refresh or two every hour to see how many dozen tweets Scoble had up, the evolution of the API and its associated tools became what truly galvanized this new experience. I found Tweetdeck and, in doing so, gained a whole new appreciation for Twitter by simply being able to visualize the workings and the interactions. I started up search columns devoted to specific hashtags and events. I was starting to add followers based on shared interests or, at the very least, evidence of an ability to contribute to something I cared about instead of randomly throwing darts at a printout of the fail whale.

And in learning this first step where I'm getting more out of Twitter than I thought possible, perhaps the most important thing I've learned about this, and other microblogging platforms, is that the API rules the roost. The explosive evolution of snippet commentary has all of its value in aggregation, and in aggregation the value is in the content, and in its content the value is in the users. I know enough to know that a thousand or ten thousand random follows on Twitter will not get me any of the value that 100 thoughtfully chosen contacts will.

Be it Twitter, Facebook, MySpace, Plurk, or any social network, you and your content are indistinguishable. Just as when you are not in the room, all that remains is the story of you, social networks are ALL story. The stories are told through podcasts, blog posts, references, subreferences, suggestions, advice, maxims, insights, and links. The snippets are you. How many close friends do you have in real life? How many regular friends? The interaction with one friend over one drink on one night of the week will give you more content and sources for relevant aggregation than a thousand random snippets.

I think I've started to "get" Twitter, but, even better, my hope is that I haven't even started to "really get it".