thinglets: Searching for "Pictures" - The 19% Rule

Was just noodling around doing some research for a presentation next week and decided to get a bit deconstructionist by searching Google Images for the word "pictures". I'd have to say the first page of results was more expected than surprising, but probably a pretty accurate representation of web image searches:

3 thumbnails of a sexual nature

11 thumbnails of animals (many with captionz)

4 thumbnails of pics that are just kinda cool

2 thumbnails of pics that are weird shit

1 thumbnail of a webpage graphic to denote other pictures

So, out of 21 pics, I'd probably only have any use for the 4 that are kinda cool. That's roughly 19%, and that value has become my new non-scientific de facto standard for how many images on the web have any real merit whatsoever.
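For anyone who wants to double-check my rigorous methodology, here's the back-of-the-napkin tally as a throwaway Python sketch (the category labels are just my own from the list above):

# tally of the first-page thumbnails from a Google Images search for "pictures"
counts = {
    "sexual in nature": 3,
    "animals (many with captionz)": 11,
    "just kinda cool": 4,
    "weird shit": 2,
    "webpage graphic pointing to other pictures": 1,
}

total = sum(counts.values())        # 21 thumbnails on the first page
useful = counts["just kinda cool"]  # the only ones I'd actually use
print(f"{useful}/{total} = {useful / total:.1%}")  # prints "4/21 = 19.0%"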

lovehate: Wolfram Alpha Renews Authority Issues

In the past few years of webolution we've dealt with advances in technologies and platforms that are greeted with great fanfare. What tends to get lost, or at least glossed over as time passes, are some of the ethical questions that arise from the technologies.

Remember when everyone thought Google was the greatest thing ever? Wait... I guess most people still do. There once was a time when the links that popped up for your searches were suspect. Why those links and not others? Was there some grand design that we were unaware of? Was Google harnessing the power of search results, feeding us what they wanted to feed us instead of an "above board", transparent list of links where my little homepage could ever have a chance of hitting the top? I remember thinking about this once... for a few seconds anyway, and then I got to my searching. I had ceded my link aggregation to Google.

Wikipedia has crowdsourced knowledge to the point where noted journalists are buying into the entries as though they're gospel. We search. We find entries that we can pretty much guarantee are suspect in one regard or another, yet we cite, source, report, and pass off the amalgamated ponderings of others as the 21st century Funk & Wagnalls. Don't get me wrong; I can appreciate the fact that we've moved from an authority system based on the printing press to one based on monitor text. I'm not naive enough to think that print encyclopedias were without bias. I do know, however, that the filtering system to go from research, to edit, to publish at a publishing house is at least tangibly more complex than clicking "edit". We have ceded our knowledge to Wikipedia.

And then there were this past year's fears of Digg manipulating stories to jack up the ratings of "superusers" or otherwise gaming its front page results. The community cried "Foul!" and most of us went back there anyway.

Let's digress for a minute though...

Google, Wikipedia, Digg - none of them are bound to ANY public recourse or obligation. These are private companies that may be community-driven in some respects, but are beholden to no one but themselves, or their shareholders. Even still, we have ceded authority to the aggregators... and I'm sadly willing to accept it, because their functionality makes my life easier and I'm far too lazy to pursue the alternative.

So now we are presented with Wolfram Alpha, which purports to be a "computational knowledge engine" - way cool, with potential written all over it. But its existence (and future) raises questions concerning our divested authority. While Google and Digg ask us to accept rankings and Wikipedia asks us to accept knowledge, Wolfram Alpha is asking us to accept solutions. While this may seem to be a fine line (and one that I'm sure I'll be accepting sometime soon), the line does lead down the path to bigger ethical questions than link aggregation.

Is it okay to cede problem solving to the web? Don't get me wrong here. I realize that WA is not apt to solve the world's problems even with the best-placed query. My fear is that the ouroboros of crowdsourcing will feed on itself at an ever-increasing rate. When does a Wikipedia entry that's received a million hits, because of its listing in Google, become so accepted that it is "fact"? When does "fact" get integrated into research which, itself, gets re-cited back into Wikipedia and other sources? When does Wolfram Alpha generate solutions based on a "fact" that, in itself, gets republished to create new "research"?

And I guess we can go back to the paper v. digital question, where I'm sure someone will correctly assert that this feared pattern has all happened before in paper, ink and press. I'll concede that. My issue is the filtering. Now it can happen in an hour or a day. Research used to be time-intensive and subject to the self-questioning that the research and publishing process would allow. The speed of the web CAN deny such reflection. Where it has always been incumbent on consumers of media to question content providers, the obligation becomes even greater when server-side computation verges on the nascent stages of AI. Alright, I know we're not talking Skynet here, but there's a big difference between "here's where you can go to possibly find the answer" and "here's the answer".

Who's afraid of the Big Bad Wolfram?

wolfram alpha

thinglets: the power of tagging

If you want people to check out your new recipe for chocolate chip cookies, just put "Watchmen" in the tags and miracles will abound. In this case, I actually did a podcast about Watchmen, so I wasn't cheating, but I am still shocked. And, seeing as this is a post about the power of putting "Watchmen" in a tag, I'll put it in this one too. I invite others to make reference to this post and tag accordingly.

tagging