I came across an article the other day (originally found via Pharyngula), Where is Everyone?, which aims to track the trend in how information is found and accessed over the past 200 years. This is a very ambitious, and very interesting project. The article starts with a colourful graph, and proceeds to analyse the implications of the graph:
Of course, somebody asked the author (a Thomas Baekdal) what data the chart was based on, and he freely answered:
The graph was based on combination of a lot of things, a number of interviews, general study, general trend movements, my experience etc. I cannot give you a specific source though, because I used none specifically.
The graphs before 1990 are all based on interviews, and a large number of Google searches to learn about the history of Newspaper, TV and Radio - and more specifically, what people uses in the past. The graphs from 1998 and up to today, is based on all the things that have happened in the past 11 years, of which I have probably seen 1000 surveys (it is what I do for a living). And the graph from 2009 and forward is based on what I, and many other people predict will happen in the years to come.
One very important thing though, this is not a reflection of my opinion. This is the result a careful analysis. There are always variations, and different types of people. But I believe that this graph accurately reflects consumer focus.
…Have you ever seen such tripe in your life? Merely using different sources and methods for different time periods would introduce uncertainty into the results, though that much, at least, would be inevitable. But, given this guy’s process, that doesn’t even register as a problem. He
cannot give you a specific source though, because [he] used none specifically. He has
probably seen 1000 surveys of the trends of the past 11 years, but can’t be bothered to cite even one. The x-axis has an arbitrarily compressed scale, skewing the shape and the speed of the trends. And, most damning of all, he gives no indication of how he measures the ‘information’ metric! (The only truly objective measure of information that I know of is Shannon’s ‘bits of entropy’, which is certainly very concrete—but there’s no indication that this is what’s meant, nor can I think of any way in which a person’s total information input can be objectively measured in these units.) How on Earth can anyone analyse the graph critically without knowing what the numbers measure?
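For readers unfamiliar with Shannon’s measure: entropy quantifies information in bits as a function of the probabilities of possible outcomes. A minimal sketch (my own illustration, nothing to do with Baekdal’s chart) of what an actually objective information metric looks like:

```python
import math

def shannon_entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), measured in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin flip carries exactly 1 bit of information.
print(shannon_entropy_bits([0.5, 0.5]))   # 1.0

# A heavily biased coin carries less, because its outcome is less surprising.
print(shannon_entropy_bits([0.9, 0.1]))   # ~0.469
```

The point is that this is concrete and computable given a probability model — which is exactly what makes it implausible that any such model underlies a y-axis labelled merely ‘information’.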
The answer to the last question, of course, is that it’s impossible, except in the sense that I am analysing it critically here: Calling it bunk. By presenting a graph, he gives himself a veneer of scientific responsibility (
Look, I have data!), but since the graph doesn’t actually objectively represent anything (so far as the reader can tell), it’s really just a distraction, an attempt to gain enough credibility in the reader’s mind that the purported analysis that follows is swallowed whole.
And he has the gall to claim that
this is not a reflection of my opinion. This is the result a careful analysis.
If he hadn’t pretended this (id est, if he had said up front that this is a mashup of various analyses of a practically unquantifiable commodity, but that he hopes that his argument, once followed through, will persuasively show a genuine trend), I might have given him some respect, but given what he actually did, he is either a fool or a liar. Either option should persuade you not to take him seriously.
To address the article as though it weren’t total bunk, his extrapolation into the future is on shaky ground for reasons that should be painfully obvious even to someone who does buy into the graph: By extrapolating current trends into the future, he seems to be ignoring the fact that the big new things of recent years—social networks, social news, etc.—came out of nowhere and took internet culture by storm. What the internet does—the most important thing it does—is enable distribution of information to vast numbers of people at virtually no marginal cost. Logistically, I can reach a thousand people as easily as one; a million almost as easily as a thousand; a billion with only a little more difficulty than a million. When someone does come up with the next killer idea—the next Facebook or Twitter or Google, or whatever it may be—it can explode at an incredible rate. On the internet, where no one is limited by broadcast range, print batch size, or radio band constraints, the primary limiting factor is user interest. The Next Great Thing may grow slowly and incrementally, or it may explode geometrically, as fast as server capacity can handle (and how fast that is depends on what the Next Great Thing is, which of course we don’t know).
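To make the geometric-explosion point concrete, here is a toy comparison (my own numbers, not anything from the article): a medium that gains a fixed audience per period versus one whose audience doubles each period, as word-of-mouth sharing allows.

```python
# Toy illustration of linear vs. geometric audience growth.
# Starting audience of 1000 in both cases, over ten periods.
linear, geometric = 1000, 1000
for week in range(10):
    linear += 1000    # broadcast-style growth: fixed gain per period
    geometric *= 2    # viral growth: audience doubles each period

print(linear)     # 11000
print(geometric)  # 1024000
```

Ten periods in, the doubling medium is nearly a hundred times larger — which is why a smooth extrapolation of current trend lines tells you almost nothing about what the curve looks like once the next such thing appears.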
In other words, even if I try to buy into the general idea, I think that his predictions are about as reliable as any ever are in futurology, and if I view the whole thing critically, it’s bunk. Either way, I can’t say I am impressed.