haggholm: (Default)

Autism is a pretty mysterious condition. No one really knows what causes it (all we really know for sure, after all this testing, is that whatever else it does, the MMR vaccine definitely doesn’t cause it…), but it’s thought to be part genetic, part environmental. A Swedish study on indoor air pollutants has now suggested that, although the data are very tentative, vinyl flooring may increase the risk of autism!

The researchers found four environmental factors associated with autism: vinyl flooring, the mother's smoking, family economic problems and condensation on windows, which indicates poor ventilation.

Infants or toddlers who lived in bedrooms with vinyl, or PVC, floors were twice as likely to have autism five years later, in 2005, than those with wood or linoleum flooring.

Whether the link is real is, as the researchers very frankly point out, as yet unknown, and only further studies can reveal it. I find this interesting to consider, however, as a case study in how easy it is to get the wrong impression from results like these. There’s a number of interesting traps to fall into.

  1. It’s fairly likely that someone will report on this, or already has, under a headline like Research finds link between vinyl flooring and autism, giving the impression that it’s clear-cut, whereas the single most clear-cut message of this study is that it ain’t so.

  2. Correlation does not imply causation, and even when there’s causation, we have to make sure we get it the right way around. As one commenter to that article pointed out, autistic children tend to be extremely preoccupied with textures. Even if there’s a direct link between vinyl flooring and autism, that doesn’t mean that the former causes the latter. Maybe families with autistic children prefer vinyl flooring because it makes the children happier, and so in a sense, autism might cause vinyl flooring!

  3. Notice that they found four, that’s four environmental factors associated with autism: vinyl flooring, the mother's smoking, family economic problems and condensation on windows. However, these variables were not controlled for, and may not be independent.

    What does this mean? Well, it may be that any or all of these variables are connected: Maybe poorer people are more likely to smoke, less likely to afford good ventilation, and less likely to afford nice hardwood floors. If any of these things really does increase the risk of autism, the other variables will be associated with it: If, say, the mother’s smoking causes autism, and more poor mothers than wealthy mothers smoke, then vinyl floors and everything else associated with poor people shows up as associated with autism in the statistics. But while the correlation is there and is real, there is (in my example) no causative relationship at all.

    This sort of thing is always a problem with any studies, especially (I believe) when randomisation is poor or sample sizes are small. These are four known and named variables that may reasonably be correlated. What would we have thought of this article if they hadn’t mentioned smoking, wealth, or ventilation? It would have painted a very different picture. And it’s not necessarily dishonesty or editorial brevity that leaves variables out of the equation: Sometimes relevant data just aren’t measured—what if the study hadn’t asked about wealth or smoking?

    I’m reminded of the very poorly thought-out article I read a little while back that claimed that light pollution at night from all the street lights and so forth led to—I don’t recall: Some health problem or other. However, light pollution goes with industrialisation, and the number of variables you introduce when you compare a more industrial to a more agricultural country is ridiculously large. The article made no mention of those at all, but spoke as though there had to be a direct causal link from light pollution to the health issue at hand (which is why I consider it such a poor article).

  4. The study was not designed to look for these data, which means that we must suspect data mining. Data mining refers to digging through a set of data looking for any relationships, whether the ones originally examined or not. The problem is that some relationship will always be found.

    Suppose, for instance, that a study is in some global sense 99% reliable. What does this mean? It means that we set out to discover whether X causes Y, and if the study says yes, we can be 99% certain that we’re right. On the flip side, there’s a 1% chance that we’re wrong. Now suppose that, since we have all these statistics anyway, we decide to check whether X causes Z, or A causes B…and so on. For every single one of these, we may (very generously) be 99% certain that it’s correct, but if we look for 100 different relationships, we know that we’ve probably got at least one wrong!

    In fact, we’re about 63% likely to have got at least one wrong (see the sketch after this list), and that’s with a 99% confidence level and the very generous assumption that the data are as reliable in unknown areas. In reality, I expect that will often not be the case: Even if I design my study to control for a lot of variables surrounding the hypothesis I set out to explore, I can’t possibly do the same for a bunch of hypotheses someone constructs from my data after the fact.

    This is why data mining is frowned upon in scientific studies. We can look at data like that and find correlations that intrigue us, and use those correlations to inspire new studies—just as this Swedish study means that it might not be a bad idea to look at possible connections between vinyl flooring (and phthalates) and autism…but we shouldn’t be fooled into thinking that they necessarily mean anything, because we know that if we look hard enough at any set of statistics, we will be able to find some spurious connections.
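To make that last bit of arithmetic concrete, here is a minimal sketch (Python, purely illustrative; the 99% confidence level and the 100 relationships are just the numbers from the example above) of how the chance of at least one spurious finding grows when you mine the same data for many relationships:

```python
import random

CONFIDENCE = 0.99        # each individual test is "99% reliable"
N_RELATIONSHIPS = 100    # how many relationships we mine the data for
N_TRIALS = 100_000       # simulated repetitions of the whole exercise

# Analytic answer: P(at least one false positive) = 1 - 0.99**100
print(1 - CONFIDENCE ** N_RELATIONSHIPS)    # ~0.634

# The same thing by brute force: each test independently errs 1% of the time
hits = sum(
    1 for _ in range(N_TRIALS)
    if any(random.random() > CONFIDENCE for _ in range(N_RELATIONSHIPS))
)
print(hits / N_TRIALS)                      # also ~0.63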

haggholm: (Default)

I just finished reading The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory by Brian Greene. It was a good book, it was well-written, and it made superstring theory about as comprehensible as I imagine it can be made to someone with my very limited knowledge of mathematics (a math minor; some linear algebra and basic multivariate calculus years ago). If you’re curious about superstring physics but don’t have the maths background to read a very technical account, I’d recommend it.

That said, the book leaves me with two reflections on why the theory so fails to capture my interest and conviction—I highlight the above because it’s the theory, not the book. Bear in mind my limitations: Someone who knows more physics than I do might view things very differently.

The first reason is simply that the theory is extremely mathematical. I can qualitatively explain the reality of time dilation with nothing more complex than a stick to draw some lines in sand, and someone with vastly less mathematics than myself has no trouble grasping it. String theory isn’t like that; it can’t be discussed without going into higher-dimensional geometry and making reference to very abstruse realisations (I don’t know how many times that book used the term Calabi-Yau manifold—if you were curious, Wikipedia informs me that they are sometimes defined as compact Kähler manifolds whose canonical bundle is trivial, though many other similar but inequivalent definitions are sometimes used). Even when the discussion is clear, it’s littered with footnotes to help mathematically inclined readers actually get it. Since I’m not that mathematically inclined, I have a sour taste of ex cathedra in my mouth: I understand much more of what the theory claims than I did before I read the book, but I don’t understand nearly as much of the wherefores as I should like, and memorising facts is not what learning science is about.

Of course, that’s a consequence of my own limitations as much as it is of the theory. There’s no reason why the laws of the universe should be constrained to my comprehension, even if I do like the idea, variously attributed to Feynman and Einstein, that if you can’t explain it to a six year old, you don’t really understand it.

The second objection I have is that I feel, as I have long felt, that string theory is oversold. Oh, it may very well be the theory with the greatest potential to explain reality we have ever known—but we don’t know whether it is. True, it can be made to generate predictions that match what we know from traditional point-particle quantum mechanics, but that’s post hoc and therefore vastly less impressive. Of all the horribly abstruse mathematical theories of physics anyone could possibly think up, it’s obvious that only ones that agree with known facts will be kept around; but that doesn’t tell us whether they are correct in areas where they don’t just tell us what we already know.

String theory is the theology of physics, in a somewhat narrow sense: Like theology, it’s a lofty framework with many grand implications; like theology, we just don’t have any evidence that it’s true. Of course I do not think that they are equally credible. Since most of the world’s most brilliant theoretical physicists seem to feel fairly confident about it, and since they obviously know more about it than I do, on top of being much smarter than I am, and being in a profession where checking your results against nature is the highest goal, it’s probably true. But I am not prepared to go out and say that it is true until it’s generated some honest-to-god falsifiable predictions (pun intended). And from all that I have heard, and all that the book has taught me, I’m still not excited: There are some fairly out-there things that string theory predicts might happen (and if we see them, it’s almost certainly correct), but then again they might not (so if we don’t see them, we still can’t discard it). This, again, brings theology to mind.

If any string physicist wanted to impress me, he should come up with a falsifiable prediction—If string theory is correct, we should see this; if we don’t find that result, then string theory is wrong, and of course it would have to be a result we can check by experiment, not just yet another agreement with existing theory. Ideally, we should then perform the experiment, which is why sentences starting with If we could build a particle accelerator the size of our solar system… don’t impress me, either.

I understand why it is referred to as a ‘theory’—it’s too complex and comprehensive a framework to be accurately summarised as a single hypothesis. But I also find it problematic, or at least annoying, in that we usually reserve the term ‘scientific theory’ for frameworks that are supported by falsifiable evidence. Thus far, string theory is not. It’s an extremely impressive edifice, and it may well be a tower that takes us to the stars, but it might yet turn out to be built on sand.

haggholm: (Default)

“Orac” over at Respectful Insolence has a writing style that’s fairly prone to offend—definitely pugnacious, and very fond of side swipes at those he dislikes (primarily alternative medicine quacks)—and I don’t blame him for his distaste, which in fact I share, but it does sometimes make his essays a bit harder to slog through. (He also has an inordinate fondness for beginning sentences with Indeed. This is one area where I can tentatively claim superiority: I can also be pugnacious and come off as offensive, but while I am no less prone than Orac to complicated sentence structure, I’ve never been accused of any such repetitive verbal tic.)

However, those foibles aside, he has written some very good stuff (he’s on my list of blogs I read daily for a reason), and this article, summarising and explaining the work of one John Ioannidis, was very interesting indeed. The claim it looks at is a very interesting and puzzling one: Given a set of published clinical studies reporting positive outcomes, all with a confidence level of 95%, we should expect more than 5% to give wrong results; and, furthermore, studies of phenomena with low prior probability are more likely to give false positives than studies where the prior probabilities are high. He has often cited this result as a reason why we should be even more skeptical of trials of quackery like homeopathy than the confidence intervals and study powers suggest, but I have to confess I never quite understood it.

I would suggest that you go read the article (or this take, referenced therein), but at the risk of being silly in summarising what is essentially a summary to begin with…here’s the issue, along with some prefatory matter for the non-statisticians:

A Type I error is a false positive: We seem to see an effect where there is no effect, simply due to random chance. This sort of thing does happen. Knowing how dice work, I may hypothesise that if you throw a pair of dice, you are not likely to throw two sixes, but one time out of every 36 (¹/₆×¹/₆), you will. I can confidently predict that you won’t roll double sixes twice in a row, but about one time in 1,296, you will. Any time we perform any experiment, we may get this sort of effect, so a statistical test, such as a medical trial, has a confidence level, where a confidence level of 95% means there’s a 5% chance of a Type I error.
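The dice figures are easy to verify by brute force; here is a throwaway simulation (a Python sketch, only to illustrate the arithmetic above) that estimates both probabilities:

```python
import random

def roll_pair():
    """Roll two six-sided dice."""
    return random.randint(1, 6), random.randint(1, 6)

N = 1_000_000
double_six = sum(1 for _ in range(N) if roll_pair() == (6, 6))
print(double_six / N)        # about 1/36, i.e. 0.028

twice_in_a_row = sum(
    1 for _ in range(N)
    if roll_pair() == (6, 6) and roll_pair() == (6, 6)
)
print(twice_in_a_row / N)    # about 1/1296, i.e. 0.0008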

There’s also a Type II error, or false negative, where the hypothesis is true but the results just aren’t borne out on this occasion. The counterpart here is the statistical power of a study (the probability of detecting a real effect), but unlike the 95% convention for confidence levels, there is no standard threshold that every study is expected to meet.

This latter observation is a bit problematic, and leads into what Ioannidis observed:

Suppose there are 1000 possible hypotheses to be tested. There are an infinite number of false hypotheses about the world and only a finite number of true hypotheses so we should expect that most hypotheses are false. Let us assume that of every 1000 hypotheses 200 are true and 800 false.

It is inevitable in a statistical study that some false hypotheses are accepted as true. In fact, standard statistical practice [i.e. using a confidence level of 95%] guarantees that at least 5% of false hypotheses are accepted as true. Thus, out of the 800 false hypotheses 40 will be accepted as "true," i.e. statistically significant.

It is also inevitable in a statistical study that we will fail to accept some true hypotheses (Yes, I do know that a proper statistician would say "fail to reject the null when the null is in fact false," but that is ugly). It's hard to say what the probability is of not finding evidence for a true hypothesis because it depends on a variety of factors such as the sample size but let's say that of every 200 true hypotheses we will correctly identify 120 or 60%. Putting this together we find that of every 160 (120+40) hypotheses for which there is statistically significant evidence only 120 will in fact be true or a rate of 75% true.

Did you see that magic? Our confidence level was 95%, no statistics were abused, no mistakes were made (beyond the ones falling into that 5% gap, which we accounted for), and yet we were only 75% correct.

The root of the problem is, of course, the ubiquitous problem of publication bias: Researchers like to publish, and people like to read (so journals like to print), positive-outcome studies rather than negative ones, because a journal detailing a long list of ideas that turned out to be wrong isn’t very exciting. The problem is, obviously, that published studies are therefore biased in favour of positive outcomes. (If not, all 800 studies of false hypotheses, most of them correctly reporting negative results, would have been published and the problem would largely disappear.)

Definition time again: A prior probability is essentially a plausibility measure before we run an experiment. Plausibility sounds very vague and subjective, but can be pretty concrete. If I know that it rains on (say) 50% of all winter days in Vancouver, I can get up in the morning and assign a prior probability of 50% to the hypothesis that it’s raining. (I can then run experiments, e.g. by looking out a window, and modify my assessment based on new evidence to come up with a posterior probability.)
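As a toy version of that rain example, here is the prior-to-posterior update written out as a Bayes calculation (the 80% and 10% likelihoods for what I see out the window are made-up numbers, not anything from the article):

```python
# Prior: it rains on 50% of winter days in Vancouver.
p_rain = 0.5

# Assumed likelihoods for the window "experiment" (made-up numbers):
# wet-looking streets 80% of the time when it rains, 10% when it doesn't.
p_wet_given_rain = 0.8
p_wet_given_dry = 0.1

# Bayes' theorem: P(rain | wet) = P(wet | rain) * P(rain) / P(wet)
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)
posterior = p_wet_given_rain * p_rain / p_wet
print(posterior)    # ~0.89: the observation moved the 50% prior up to about 89%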

Now we can go on to look at why Orac is so fond of holding hypotheses with low prior probabilities to higher standards. It’s pretty simple, really: Recall that the reason why we ended up with so many false positives above—the reason why false positives were such a large proportion of the published results—is because there were more false hypotheses than true hypotheses. The more conservative we are in generating hypotheses, the less outrageous we make them, the more likely we are to be correct, and the fewer false hypotheses we will have (in relation to true hypotheses). Put slightly differently, we’re more likely to be right in medical diagnoses if we go by current evidence and practice than if we make wild guesses.

Now we see that modalities with very low prior probability, such as ones with no plausible mechanism, should be regarded as more suspect. Recall that above, we started out with 800 false hypotheses (out of 1000 total hypotheses), ended up accepting 5% = 40 of them, and that

It's hard to say what the probability is of not finding evidence for a true hypothesis because it depends on a variety of factors such as the sample size but let's say that of every 200 true hypotheses we will correctly identify 120 or 60%. Putting this together we find that of every 160 (120+40) hypotheses for which there is statistically significant evidence only 120 will in fact be true or a rate of 75% true.

That is, the proportion of true hypotheses to false hypotheses affects the accuracy of our answer. This is very easy to see—let’s suppose that only half of the hypotheses were false; now we accept 5% of 500, that is 25 false studies, and keeping the same proportions,

…Let's say that of every 500 (was 200) true hypotheses we will correctly identify 300 (was 120) or 60%. Putting this together we find that of every 325 = 300+25 (was 160 = 120+40) hypotheses for which there is statistically significant evidence only 300 (was 120) will in fact be true or a rate of 92% (was 75%) true.

We’re still short of that 95% measure, but we’re way better than the original 75%, simply by making more plausible guesses (within each study, we were still equally likely to make either Type I or Type II errors). The less plausible an idea is, the higher the proportion of false hypotheses will be out of all the hypotheses the idea generates; that is, the lower its ratio of true to false hypotheses. Wild or vague ideas (homeopathy, reiki, …) are very likely to generate false hypotheses along with any true ones they might conceivably generate. More conventional ideas will tend to generate a higher proportion of true hypotheses—if we know from long experience that Aspirin relieves pain, it’s very likely that a similar drug does likewise.
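The same bookkeeping can be written as a small function (a Python sketch restating the arithmetic from the quoted example; the 5% false-positive rate and 60% detection rate are the figures used above), which makes it easy to see how the proportion of true positive findings moves with the prior plausibility of the hypotheses:

```python
def proportion_true(n_hypotheses, fraction_true, alpha=0.05, power=0.60):
    """Of the hypotheses that come out statistically significant,
    what fraction are actually true?"""
    n_true = n_hypotheses * fraction_true
    n_false = n_hypotheses - n_true
    true_positives = n_true * power      # true hypotheses we correctly identify
    false_positives = n_false * alpha    # false hypotheses accepted as "true"
    return true_positives / (true_positives + false_positives)

print(proportion_true(1000, 0.2))    # 0.75  -- the original 200/800 split
print(proportion_true(1000, 0.5))    # ~0.92 -- the 500/500 split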

This is not to say that no wild ideas are ever right. Of course they sometimes are (though of course they usually aren’t). What it does mean is that not only should we be skeptical and demand evidence for them, there are sound statistical reasons to set the bar of evidence even higher for implausible than for plausible modalities.

It is also a good argument for the move away from strict EBM (evidence-based medicine) to SBM (science-based medicine) where things like prior probability are taken into account. Accepting 95% double-blind trials at face value isn’t good enough.

haggholm: (Default)

Skeptical blogger Ziztur has a little project called Ray a Day wherein she (with the assistance of her boyfriend) goes through one question a day from Ray Comfort’s book, You Can Lead an Atheist to Evidence, But You Can’t Make Him Think: Answers to Questions from Angry Skeptics. The other week, she circulated an email among her awesome commenters, in which number I have been granted the honour of inclusion.

Here’s my contribution, wherein I address Ray’s “answer” to a question challenging the existence of Hell, and muse a bit on the motivation underlying such unapologetic apologetics. Go read it, then read the rest of Ziztur’s blog.

haggholm: (Default)

Although I’m currently reading Don Quixote, I needed a book this weekend that was physically small enough to fit in my coat pocket, so I have re-read Crichton’s Jurassic Park. It goes into much more scientific detail and discussion than the movie, and I always enjoy reading it: It has people being eaten by dinosaurs and it’s thought-provoking; what’s not to love?

There is another side to it, however. To a much greater degree than the movie, partly in its description of the book’s scientists like Dr. Wu, and partly in the commentary by Ian Malcolm, it acts as what I can most charitably describe as a ‘cautionary tale’. Less charitably, it comes off as anti-scientific: Dr. Malcolm explicitly describes science as an outdated and doomed mode of thinking, and the turn of the story gives the impression that his is the message that Crichton wanted to deliver¹. (From what I have heard of some of his other books, that’s not uncharacteristic of Crichton’s thinking and writing.) As a great fan of reason and scientific thinking, I don’t much like that message, but on the other hand (and for the same reason), it would be irresponsible of me not to at least think about it.

The gist of the story is of course extremely simple: Scientists attempt to play god, things go terribly wrong, they are terribly surprised, and bad things happen. In such brief terms I can hardly fault it. Scientists are just people, after all, with varying motivations; and they work for people who also have varying motivations. Sometimes things go very wrong, and with the power of what Crichton has Dr. Malcolm refer to as inherited power, the consequences can today be very far-reaching indeed. But I take exception to this message on two points.

One is that the message presented is that scientific thinking leads to disaster, but the message delivered by the actual plot of the book is that science in the hands of irresponsible con men (like Hammond) and immoral businessmen leads to disaster. That’s fair enough and a point both more nuanced and more true: Science has given us great power for good or for evil, and we must wield it responsibly—but science itself, and scientific inquiry and knowledge, is no more moral or immoral than any other tool. A hammer can save lives in helping to build shelter, or it can be wielded as a lethal weapon, but the hammer itself is neither good nor evil: It’s just a tool. Just like a hammer, science has no inherent morality, and does not make claims to it. Science is a tool for finding the truth about reality—ultimately, what the scientific method is, is an astonishingly wonderful way of arriving at accurate claims of the form If X, then Y. I don’t mean to say that it has no applicability to moral decisions. On the contrary, it is essential, in that we cannot make good moral decisions unless we attempt to predict the consequences of our actions. But ultimately, science tells us what’s likely to happen given certain events and conditions. It doesn’t tell us what should happen. We always have to establish some arbitrary, subjective ground case, a premise for our reasoning—Human happiness is good, It is better to be truthful than to lie, All people are morally equal unless they forfeit that standing by working evil. This is not novel, and no scientist or scientifically inclined person I know has to my knowledge claimed otherwise. This is why we claim to be things like humanists, which does establish a moral basis.

My second objection is that while Crichton appears to rail against science as an endeavour, the novel actually serves as a critique of bad science, science executed naïvely. This is of course a fair point, too, but again the message preached and the message delivered don’t seem to be the same. When Dr. Malcolm vehemently insists that the Jurassic Park endeavour is doomed to failure due to the unpredictability inherent in biological systems and predicted by chaos theory, he is leaning on scientific findings (referring, though not by name, to Edward Lorenz’s discovery in attempted weather prediction). But failing to consider this is not an inherent problem of scientific thinking at all; it’s a failure to use scientific thinking more sophisticated than a Newtonian clockwork universe². We know about chaos theory; we know about intractable problems; we certainly know about error bars.

As this was percolating in the back of my mind, I came across an interesting post by Erin with an even more interesting comment to follow: In brief, pointing out the complexity and difficulty of addressing so apparently simple a question as What is the proportion of land (times their potential yield) suitable for growing crops, as compared to land suitable only for raising animals? —Which I asked, ignorant of the ill-posed nature of the question itself, and how many variables I had failed to control just in asking a question. Here’s a real-world example of a question with every implication of the unpredictability of nature, not to mention ecological consequences (how humanity does agriculture is clearly very significant). (It comes from an unsurprising source: Erin seems to have a knack for raising questions that strike me as good, interesting, and challenging without ever wandering into wingnut territory. It is a valuable knack to have.)

But this unpredictability, the difficulty in even asking the right questions, the limitations of our ability to predict the behaviour of chaotic systems when we cannot measure the variables sufficiently closely or often—none of this means that science is therefore any worse a mode of thinking; on the contrary, science done right comes with error bars and attempts to find its own limitations; a hunger for the unanswered questions. A scientist is never someone who thinks that everything worth knowing is already known—if it were, every scientist on Earth would be out of a job. A scientist is someone who wants to identify areas of ignorance, try to ask the right questions, and find the answers.

And—as always when someone criticises science as a mode of thinking—what else should we rely on? It is true that science today can’t tell us what will happen if we clone dinosaurs, or if we adopt this or that method of agriculture, on a large scale and in the long run—but no other form of knowledge acquisition can even begin to help us, and we cannot go back. (Anyone who thinks that only modern, large-scale agriculture has the potential to harm our environment needs to read Jared Diamond and stop blindly romanticising the past.) Don’t criticise the tool: It is the best, the only tool we have for addressing those issues to which it is applicable. If you wish to level criticism, criticise those who use the tool wrongly, who approach it with the wrong presuppositions or premises, who apply it naïvely.

The notion that science as a way of thinking is doomed and a failure strikes me as nothing less than ridiculous and absurd. We don’t need a replacement for it, and whether we want or no, it is too late to go back now. What we may need is a value system and a socio-cultural framework better able to handle it.


¹ I will continue to treat this as if the character of Dr. Malcolm expresses Crichton’s opinions. I don’t claim to know for sure whether he does. At any rate, this is the message that was communicated to me, whether intended or not, and it is the message I want to address.

² Within the context of the novel, it’s also the result of an awful approach to security. Any computer programmer worth his salt knows that you should never assume that things will only go wrong in specific, predicted ways and look just for those problems; rather, you should assume that things will go wrong, and compare input to specific, predicted, correct values or formats. To name a detail, if the programmers in the novel hadn’t made the very foolish decision of assuming that the number of animals found would never exceed a given number, stopping the count at that point, the unexpected breeding would have been found out.
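As a minimal sketch of the difference (the function names and animal counts here are hypothetical, not the novel’s actual code), compare a census routine that stops at the expected total with one that counts everything and lets the discrepancy show:

```python
EXPECTED_ANIMALS = 238   # hypothetical expected population

def count_animals_buggy(sightings):
    """Stops as soon as the expected total is reached, so any extra
    (bred) animals are silently ignored."""
    count = 0
    for _ in sightings:
        count += 1
        if count == EXPECTED_ANIMALS:
            break   # assumes nothing unexpected can happen
    return count

def count_animals_correct(sightings):
    """Counts everything it sees; the caller can then compare the
    total against the expected value and notice a discrepancy."""
    return sum(1 for _ in sightings)

sightings = range(292)   # more animals out there than expected
print(count_animals_buggy(sightings))     # 238 -- the breeding goes unnoticed
print(count_animals_correct(sightings))   # 292 -- the discrepancy is visible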

haggholm: (Default)

As someone who values the scientific method and philosophy of logical inferences and deductions, empirical observation, Occam’s Razor, and so forth, with a rather long list of highly valued principles; as someone who thinks that few approaches to finding truth can compete with experiments in physics, or double (or triple!) blinded medical experiments (when evaluated as Science Based Medicine [SBM] rather than Evidence Based Medicine [EBM], with the former’s greater understanding and acknowledgement of prior probabilities, etc.)—

—As a naturalist, materialist skeptic, in other words, one of the most irritating and most pernicious cognitive traps is that of scientific-sounding rationalisation that isn’t science at all. I daresay it’s something several people I have talked to (or do talk to) would accuse me of, had they happened to express themselves using my exact vocabulary (though it isn’t their entire beef). I do acknowledge that it’s real; I do not claim to be immune to it; I do my best to guard myself against it. …But what am I actually talking about? What is the basis for my worldview, what are the common errors and how do I try to guard against them; what are the inherent weaknesses, and how do I justify adhering to this mode of thinking in spite of them?

What science is

Science is a process of finding the truth¹ by

  1. Making observations (gathering data);
  2. Constructing a model (forming a hypothesis);
  3. Making predictions based on the model;
  4. Verifying the predictions, and throwing out the model if it’s wrong.
Of course that’s a very rough generalisation, but that’s the basic idea: You need data to construct a model; you need a model to generate predictions; and if you don’t have any verified predictions (which means they must be falsifiable! —it’s not verified unless you leave room for failure if the model’s wrong), it isn’t science.

Scientific explanations aren’t like that, though. A scientific explanation of an already-observed phenomenon is not science; it’s just based on it. It cannot be science: In order for it to be science, I need to construct a model and check that it’s a good model. We don’t always do this when we explain something scientifically. Instead, a scientific explanation is an explanation of some phenomenon or occurrence based on what science has shown is feasible. If you show me an example of purported levitation, or a UFO sighting, or similar, and I explain (very reasonably and probably correctly) that there are natural explanations for what I’ve seen, I’m just observing that it fits the current scientific models.

Post hoc rationalisation

The real problem, here, enters the picture when we try to construct models and treat them as scientific models relying completely on post hoc data—and the difference can be a subtle one. The problem is that it is impossible, without generating and testing predictions, to know whether the model is actually correct, or merely happens to apply to the data at hand—which may be woefully incomplete, subject to natural selection bias or (more sinisterly) to cherrypicking.

This is a problem not just in discussions where people attempt to sound scientific, but also historically with science itself—very notably before the modern scientific method was formulated. People who don’t trust the scientific way of finding the truth are often fond of pointing out egregious pseudosciences that were, in their own time, respected and considered as scientific as anything else. Whether the balance of humours, the theory of miasms, the notion of the luminiferous æther, phrenology, or any other such now-discredited concept (I almost said homeopathy, but it never really did have that sort of credibility), they once did serve as models to explain all the data compiled into their construction—but all of them were wrong, and were able to survive because they were not used to generate and test falsifiable predictions. …Survive for a while, that is: By now all of the above are discredited because we do know better; and sometimes, to their credit, it is the proponents of ideas that discover and publish the fact that the ideas are wrong; as with Michelson and Morley and their famous æther experiment.

Let me be very clear and explicit in saying that if you cannot generate and test predictions, it is never science in the true sense, however reasonable your explanation. (Note that prediction refers to a prediction of what we will find—in historical and palæontological sciences, the events happened long ago, but we can still make predictions about what new data will reveal; hence that famous prediction in palæontology that we will never find a Precambrian rabbit fossil.) This is why interpretations of quantum physics are metaphysical rather than physical, since they do not make different predictions. (On a not unrelated observation, I despise the term string theory—it is string hypothesis until predictions are tested!)

Equivocation

As with any profession or culture, science is full of very technical jargon. Many of the most awful abuses of science I have come across have been straightforward instances of equivocation. The most infamous is, of course, the creationist’s claim that evolution is just a theory, apparently unaware that an explanatory framework in science can have no higher title. (Some would have it that if evolution were proven, it would be the law of evolution, but that’s just not true: Explanatory frameworks are never elevated to laws, no matter how solid.) To quote Stephen Jay Gould:

Evolution is a theory. It is also a fact. And facts and theories are different things, not rungs in a hierarchy of increasing certainty. Facts are the world’s data. Theories are structures of ideas that explain and interpret facts. Facts do not go away when scientists debate rival theories to explain them. Einstein’s theory of gravitation replaced Newton’s, but apples did not suspend themselves in mid-air, pending the outcome. And humans evolved from ape-like ancestors whether they did so by Darwin's proposed mechanism or by some other yet to be discovered.

Another favourite term to pervert and subvert is that of energy, which quantum quacks gleefully use as though I feel full of energy were related by an equation to E=mc²—when in fact energy in physics is a strictly and technically defined term, related to the colloquial word by etymology and analogy, but no more. Force suffers similar abuses.

Now, I’ve used pretty obvious examples, but equivocation can actually be pretty difficult to detect when used skilfully, and it can lead to genuine misunderstanding. The equivocator’s trade lies in

  1. equivocating using such language that the equivocation is not obvious (i.e. it looks as though the technical sense might apply on both sides of the equation); and/or
  2. couching the equivocation in a discourse sufficiently technical that the reader (or listener) just isn’t qualified to tell them apart.
The latter is particularly pernicious because the same word may mean different things in different contexts, or have different definitions in the jargons of different fields. Apart from education (which sounds like such a good idea, but really, you can’t be well educated in everything) and listening very carefully, about the only way I know to avoid this sort of thing is to make sure that anyone who claims that two things are mathematically related can actually show the form of the equation.

Non sequiturs, red herrings, and other fishy things

Another problem, less subtle than true equivocation and more easily countered, but nonetheless a cause for vigilance and concern, enters the picture when the model is (or may be) complete, but the relevance hasn’t been established. One example I see very frequently indeed is in the context of the martial arts message boards I frequent, where every so often someone will attempt to validate his (usually arcane) form of martial arts as having the superior form of punching by using not fight records (showing that his stylists can beat up other stylists) but physical equations—attempting to show, for instance, that this method really does maximise the momentum transfer of a punch. Maybe so; maybe not. The problem here is that it was never shown that maximising the momentum transfer is really what makes a punch most powerful; the next person with the next martial art will instead present some half-arsed equations showing that he can maximise force—or impulse, or velocity, acceleration, jerk, pressure, or some other physical property. It may be some time before anyone steps back from trying to disprove the equations to realise that it’s a non sequitur.

In short, one must not fall victim to the belief that just because someone backs up a claim with a scientific-sounding argument, that argument necessarily supports the claim. It is a requirement of a valid logical argument that the conclusion necessarily follows from the premises—but this has to be established; it cannot simply be assumed.

In defence of Occam’s Razor

Occam’s Razor is the philosophical principle that out of any set of explanations, the simplest is always to be preferred:

Numquam ponenda est pluralitas sine necessitate.
(Plurality must never be posited without necessity.)

Frustra fit per plura quod potest fieri per pauciora.
(It is futile to do with more things that which can be done with fewer.)

William of Ockham

My own favourite formulation runs somewhat as follows:

Given two explanations of a phenomenon, all else being equal, choose the one that requires the fewest assumptions.

This always looks suspect—why should we prefer the simpler explanation? Aren’t explanations of things sometimes genuinely and necessarily complex? How on Earth can you justify basing a depiction of reality on something so vague and susceptible to error?

The answer to this is twofold. First, note my inclusion of all else being equal—of course if we have two explanations for a set of phenomena, both of which generate testable predictions, and one of them produces significantly better fits to newly observed data, that’s a pretty good case to support this one even if it’s more complex. Occam’s Razor isn’t to be used as an excuse to throw out superior models.

Secondly, it is an unfortunate fact that for any phenomenon, you can generate an infinite number of explanations by making them increasingly complex. A simple example is found in every statistics class, where we find that anytime we try to fit a polynomial curve to a set of data points, we can always achieve an equally good or better fit by increasing the order of the polynomial (making our explanation more complex). In fact, more complex explanations are here usually better fits to the data, because the data have measuring errors and so don’t fit exactly to the predicted curve of the right explanation, whereas a very high-order (complex) polynomial can jitter up and down to match every blip in the data.
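A quick way to see this is to fit polynomials of increasing order to the same noisy data and watch the fit to those points improve even though the underlying relationship is a simple straight line (a sketch using numpy; the line-plus-noise data are of course made up):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 15)
y = 2.0 * x + 1.0 + rng.normal(scale=0.2, size=x.size)   # a straight line plus noise

for degree in (1, 3, 6, 9):
    coeffs = np.polyfit(x, y, degree)
    residual = np.sum((y - np.polyval(coeffs, x)) ** 2)
    print(f"degree {degree}: residual sum of squares = {residual:.4f}")

# The residual only ever shrinks as the degree goes up: the higher-order
# polynomials are chasing the measurement noise, not the underlying line.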

It should be pretty clear that we can’t get very far if we attempt to maintain an infinite number of explanations for everything we try to explain. Instead, we choose the simplest explanation that works and resort to more complicated ones only if it turns out that the predictions we get from our simpler models aren’t up to snuff. Of course, this process is empirical—we may find out only in time that our model has shortcomings. Unfortunately, it’s the only way we have to proceed; it would be nice if we had some more absolute way of finding out truth, but we don’t. (This is why it is often said of science, properly done, that it tends to approach the truth asymptotically—it can never reach an unassailable, absolute position of Truth, but every subsequent model, because it has to account for all the data that the old model did explain, as well as the data that we’ve found the old model fails to explain, is a better model than the last.)

A more colloquial sort of defence of Occam’s Razor is to observe that it is the principle whereby we explain the world in terms of things we know to be possible, rather than arbitrarily positing things that we don’t know have even a chance of being true. In an example that I freely acknowledge to be something of a reductio ad absurdum, if I leave some chocolate on the table next to a child, turn around for a minute, then return to find the chocolate gone, and the child claims that he did not eat it, but rather that the chocolate was teleported away by aliens for scientific study, I would be an idiot to believe him—not because it is absolutely impossible, but because one explanation (he’s lying) rests on observably possible things (children sometimes eat chocolate; children sometimes lie), whereas the other postulates something (the existence of aliens with teleportation technology) for which there is no evidence.

Jumping from that insultingly trivial example to something that actually is a matter of debate, some people claim that the human mind cannot be the product of the brain alone, but must also rely on something called a soul, or other immaterial and scientifically undetectable entity that is not a product of material causes. Here, the naturalistic explanation is a lot less obvious—we cannot at present show how the mind results from the brain. Some would also argue that the alternative explanation (there is a soul) is less absurd than the alien hypothesis (though I would disagree).

The difference, however, is purely quantitative. We know that brains and minds are very intimately connected—we can observe the mental and psychological effects of damage to the material brain, artificial or natural alterations of brain chemistry, et cetera; we can scan activity in the physical brain correlated with thoughts and emotions; we can view the material development of the brain from embryo to developed organ. The assumption in the naturalist model is just this: That in a complicated neural network consisting of a hundred billion neurones, with feedforward and feedback loops, and modulated and assisted by glial tissue and chemical catalysts in the form of neurochemicals; in such a network that has, furthermore, evolved by fairly well-understood principles of natural and sexual selection, sufficient complexity has arisen to explain the minds that we now experience.

The soul hypothesis may look simpler because it can be summarised more briefly, but its assumptions are actually huge. It postulates the existence of something for which we have no evidence whatsoever—the materialistic hypothesis is complex, but it rests on established facts. Furthermore, it postulates that this intangible soul—which no scientific instrument has been able to detect—is yet able to exert causal effects on the material brain, since it is very clearly established that it is in the brain that our motor impulses originate. (René Descartes thought that the soul operated on the brain through the pineal gland.)

It also suffers the difficulty that if we be allowed this one, completely unfounded assumption (there exists a soul), it is difficult to see why we should disallow any unfounded assumption as a valid rival, so long as it is not falsifiable; for instance, I might equally well claim that your brain is run by a computer program (written by aliens, or the NSA, or the Illuminati); that you are mind controlled by the White Mice… These sound more ridiculous to us, but that is a cultural artefact. The Soul Hypothesis and the White Mice Hypothesis have an equal basis in evidence.

I daresay that William of Ockham, a Franciscan friar, would disagree with my conclusion; but while I think that his beliefs were most likely pretty absurd, I nevertheless think that his principle of parsimony is a necessary part of a rational worldview.

Occam’s Razor or rationalisation?

When I get involved in discussions or debates questioning this worldview and philosophy of mine, and (this is, alas, a limited subset) when the person I am conversing with is neither stupid nor patently insane, one of the most common and most reasonable objections runs something like this (in spirit; I reformulate rather than paraphrase):

What you say generally makes sense, but because the philosophy ultimately relies on empiricism in determining what’s real and what’s not, it is—as you acknowledge—imperfect, and unless you end up clinging dogmatically to your skeptical beliefs (contrary to your own philosophy) you are bound to change your mind on things as new evidence emerges.

Well, given that, by what right do you reject (for instance) the soul concept so strongly? You may think that there is no evidence, but evidence may come up, and you certainly have no strong counterevidence. Why not just keep an open mind?

…As, of course, I should, but as the saying goes, You should keep an open mind to new ideas, but not so open that your brains fall out. It should be noted and emphasised that when I say There is no such thing as a soul, what I mean is To the best of my knowledge, and according to the best evidence and reasoning available, there is no reason to think that there exists such a thing as a soul. (Whether I am truly as open-minded as I should like to new evidence in the areas where my worldview is heavily invested is, of course, hard to say; but I’m defending the way I try to structure my thinking, not rating how well I measure up to my own ideals.)

Ultimately, I regard this as a quantitative rather than a qualitative distinction. It is not whether I am willing to credit the possibility of some improbable concept or not, but just how large or small a probability I am willing to credit it with. My own tendency and preference is to treat extremely improbable things (souls, færies, ghosts, celestial teapots, alien abductions, gods, Great Green Arkleseizures, effective reiki or homeopathic remedies, etc.) as so improbable that unless better evidence or arguments are presented, I will not bother with the possibilities of their reality at all. I will freely acknowledge that they are non-zero, in the absolute sense, but it’s such a small one (0.00…01%) that it’s not worth bothering with. This is, of course, where I often and easily come off as contentious or even contemptuous, and people whose approaches otherwise look similar to mine may differ in how small a prior probability we should assign these things. (If you want a gentler version, read the late, great Dr. Carl Sagan’s The Demon-haunted World.)

Of course, being but a human, I’m very likely to be wrong about a great many things, some of which may very well be of fundamental importance to my worldview. If we were to line up all the things I consider to be ludicrously improbable, I expect that, due to human fallibility and my own sometimes unfortunate tendency to over-swift rejection of apparently unscientific notions, I will be wrong about a number of them. But I consider the probability of my being wrong on any specific such matter to be very small. In the end it comes down to whether I would rather say Well, maybe to everything I come across, or whether I’m willing to say Yes and No and be prepared to eat my hat when the day comes when I turn out to be wrong. I’m willing to do the latter (against which occasion I cleverly do not wear a hat). To me, there’s no other sensible way to proceed. If I accept the soul hypothesis as likely enough that I shouldn’t reject it utterly out of hand, I cannot remain intellectually honest and consistent without accepting any number of other such claims—UFOs, homeopathy, reiki, unlucky black cats, astrology, and all the rest.

I would rather be forceful and intellectually honest and consistent, and occasionally be wrong, than either waffling about every incredible claim anyone cares to make, or arbitrarily accepting improbabilities based solely on their cultural or social acceptability.


¹ Some people hold that hunches and intuition are as good guides to truth as is scientific inquiry; to which, apart from obvious retorts (Would you rather travel on an airplane built according to scientific principles, or on one based on intuited ærodynamics?) I would like to reply that whenever a hunch is opposed to reason, I have a hunch that reason is right; so hunches cancel out and we’re left to trust to reason².

² Unless you belong to that school of thought that claims that there is no such thing as objective reality; that all reality is subjective. I have a vast dislike for such solipsism, but I suppose I can’t deny you your subjective reality; in my reality, however, things are objective and that school of thought has no value or standing.

haggholm: (Default)

The other day, someone told me that I treated science as a god replacement—in fact, two people who more or less know me had apparently been talking about me in those terms. I’m still trying to figure out whether that’s a good thing or not, apart from the blind faith connotations inherent in the religious terms (to my ears, at least).

The key is, of course, to nail down exactly what one means by science. I do not think that any scientific fact should ever be held as gospel—many are solid beyond the shadow of a doubt, but new facts, new lights, may cast new shadows. Obviously, I do not wish to deify any scientist. There are many scientists whose minds and accomplishments I admire (and while some, like Sir Isaac Newton, were awful human beings, others, like Charles Darwin, seem to have been very good people morally as well as intellectually), but they are or were human and fallible. Not only am I unconcerned by ad hominem arguments (Newton's being a right bastard doesn’t detract from his intellectual achievements, or make his theory of universal gravitation any less useful); I also am not bothered by any scientist being shown to have been wrong on any particular point.

This may not always be obvious in conversation and debate, of course. I’m rather fond of putting it so: When I am convinced that something is sufficiently probable that the margin of error, though never zero, is negligible for practical purposes—e.g. I am so convinced of the truths of gravity, electromagnetism, evolution, etc. that I see no reason to take the null hypotheses into consideration—I choose to say that I know it is so, because it is shorter, more forceful, and more pithy than I am convinced that the probability of it being so is so high as to render the null hypothesis negligibly small for practical considerations, even though the latter more accurately describes the stance I try to take.

There is also the lamentable fact that I’m stubborn and like to argue and have an all-too-human tendency to take a position with overtones of fiendish advocacy: If you take a position that I disagree with, I may for the sake of argument take an opposite position more extreme than I truly credit.

That aside, what’s left is not scientific fact, nor scientific practitioners, but science itself. Thus, either people who look at me and cry God replacement! misunderstand me and think that I deify scientists or scientific facts (which may be my fault as easily as it may be theirs), or it is science itself that is the issue at stake. It may well be the latter. But science is not a set of facts, or of people, though it needs these things: Science is a method and process of discovering truth based on logic, empirical observation, mathematics, statistics, probability, and naturalism. (This is not to say that science is by definition incapable of discovering supernatural phenomena—if any exist, they will be found precisely where science finds anomalies that cannot be accounted for under the assumption that these rules all hold.) It is the only method we have for discovering the real truth of the universe to any degree superior to our own brains, which, for all that they are marvellous pattern-recognising deduction machines, are also prone to finding false patterns, pareidolia, conflating correlation with causation, and many other errors that we should expect from an animal that pays a much more dire price for false negatives than for false positives, and whose heuristics have limited powers.

I asked someone recently in a rather pointed way whether she felt that intuition was as good a guide to truth as scientific inquiry. She replied (you’re going to think I’m crazy, but…) that she does, because scientific models are always constrained by prevailing cultural and intellectual paradigms. In some areas of research, this may have a point, but I think it misses the main point entirely, because this is precisely what science is meant to avoid. If you can point out a way in which a study—any study—has a risk of being less than objective due to such biases, you have not poked a hole in the scientific method in general, but rather identified another bias for high-quality scientific inquiry to correct for—along with logical fallacies like conflating correlation with causation, confirmation bias, observer effects, the Hawthorne effect, regressions to the mean, placebo effects in medicine, and myriad other cognitive quirks that we have to isolate.

But the highest goal of science is to find objective truth, and I don’t care what you say—unless you subscribe to solipsism (in which case I won’t even bother with you), there are objectively verifiable and falsifiable tests. To paraphrase Richard Dawkins, aeroplanes built to scientific specifications fly, while (cargo-cult) aeroplanes built to religious specifications do not. Differentiating between flying and not flying is not subject to cultural bias, and nor are other good scientific facts—and the hard sciences abound with them. (Our differences in opinion may have been conflated artificially due to my background in the natural sciences and this conversation partner’s background in humanities and softer sciences—the observer’s bias is much more likely to influence observations of people’s behaviour than observations of the behaviour of atoms, I should think.)

And let’s face it—ultimately, you have to reduce reality to a set of objective truths unless you want to descend into solipsism. What I wish I had said (but thought of too late) might be something like this: Intuition cannot be trusted because it cannot be tested. You have a feeling that something is true; well, I have a feeling that it isn’t. Only objective testing can settle it; otherwise there are no facts, but only opinions. I might also point out that I have a feeling that your idea that intuition is as good as science is dead wrong.

This, then, is what I think: Any scientific fact I hold up may be wrong (though some are fantastically unlikely to be so). Every scientist who ever lived, and every scientist who will ever live, is and will be fallible. But the scientific method is the only viable way of ascertaining truth beyond personal experience, and personal experience can be notoriously misleading; and any valid criticism of the scientific method as currently practiced will never tear it down, but only—at most—show how our current practice can improve.

I am open to challenges on this point.

I also feel—and this may well be controversial—that any field of inquiry that is irrevocably biased by cultural norms, etc., is not strictly scientific at all: I will not call it pseudo-science, but perhaps almost-science (or soft science…). This is not to take anything away from its practitioners: Strict science is ultimately the most accurate method for finding truth, but it is not therefore always the most practical (cf. the old hypothetical example of the folly of having children empirically test their parents’ claims that the river teems with crocodiles). If your model purports to describe anything beyond strict physical measurements, it has strayed from the field of hard science where this sort of reliability is possible.

haggholm: (Default)

This began as a post on ADD/ADHD in a forum thread elsewhere; I ended up getting sufficiently carried away in responding that I feel it’s worth reposting here.


As with many other disorders, particularly psychiatric disorders, I think a major problem is poor or lazy diagnosis, which may stem from any of a number of reasons.

I’ve had a diagnosis of IBS for the past several years, which had me resigned to a chronic problem and perhaps taking meds to ameliorate (not eliminate, let alone cure) the symptoms—because the first doctor I spoke to listened to my description of the symptoms and prescribed a drug on the Take this, see if it helps principle. Fair enough, it helped—only a few years later a better doctor (at a walk-in clinic!) listened to the same list of symptoms and decided to run a simple test. This resulted in the lab-confirmed diagnosis of a bacterial infection that took two 10-day rounds of antibiotics at a total cost to me of $20 to clear up. Thus the first doctor’s laziness or negligence in not ordering a simple test cost me several years of pain (and several hundred dollars in medication).

Psychiatric disorders frequently suffer from vague or non-existent diagnostic criteria, especially when it comes to prescribing medication. It’s rather like zoological classification before the discovery of DNA: Disorders are classified according to similar symptoms (visible criteria) rather than analysis of underlying causes. Whales used to be classified as fish, even though they’re most closely related to hippopotamuses; elephants are more closely related to rock hyraxes than to hippopotamuses or rhinoceroses, but you wouldn't know it to look at them. Medical diagnostic criteria, when diagnoses are made without lab tests, are similar. Prescriptions of antidepressants are made without actually looking at patients’ brain activity or neurochemistry; I myself take Cipralex/Escitalopram (originally Celexa/Citalopram)—the criterion? I was clearly depressed, so they started me on the drug with the least potential for side effects. I was lucky; it worked out great. It’s still a crappy criterion, and lots of people get burned. (I know some people severely disillusioned by antidepressants, and others for whom they work great, but whose lives were made severely more miserable during a period of months or years of experimenting to find the proper treatment.)

AD[H]D sounds like a disorder with very much the same problem, only more so. It’s clearly a spectrum disorder, or a set of spectrum disorders, with enormous potential for diagnostic confusion. I do not doubt that there are real disorders there, nor that medication can help sufferers, nor that it is a very good thing that such medication exists and is available to them. But I think it is terrible that so much work goes into developing these medications while so little work is done, or at least publicised, on properly analysing and targeting treatments to the people who really do need them. And either this is at least theoretically possible, or the entire model is wrong to begin with—after all, it is hypothesised (or theorised, or known; I’m no expert) that neurochemical imbalances are at fault; then we should be able to test for them.

In more crass terms, I wish the pharmaceutical business model would shift to make, proportionally, a little more money in precise diagnostics and lab tests (test kits cost money, too!) and a little less (again, proportionally) on treatments.

I also wish that doctors had stiffer backbones and simply said no to parents or patients demanding drugs for which there is clearly no good indication. This is surely a major problem with AD[H]D; it's also a problem with drugs like antibiotics, often prescribed for viral conditions where they are of no use whatsoever (save placebo; and it's better to prescribe inert placebo or vitamins) but select for resistant bacteria.

haggholm: (Default)

There is no terribly novel material here; chiefly I write this little essay in order to have a fixed location to refer back to when, in the course of some argument or other over on Bullshido.net, I have to reiterate what I have said before (considering the length of this, I think excerpts will be called for). (It should also be mentioned that much of my thinking on the subject has been very directly inspired by posters on that website.) That said, on to the meat.

A lengthy article… )
haggholm: (Default)

Prelude (this post isn't really about parrots): Because I like parrots, because I like to be prepared, because I am fond of knowledge, and because it is a way of living vicariously through strangers, I frequent a couple of webforums dedicated to discussing said birds. Because I am who I am, what I am, and the way I am, it's probably fair to say that a few of the discussions I enter end up less ‘discussions’ than ‘debates’. It's not just me, though—there are some definite hot-button issues, and one of them is crossbreeding different species (hybreeding as some people call it, probably more due to poor spelling than clever coining of terminology).

Think what you will of the pros and cons (or harbour no opinions at all), my basic stance is that (ignoring conservationist issues for the moment) the important consideration is whether it is cruel or kind to the animals. After all, if you breed pets, just as if you own pets, you owe it to the animals to make their captive existence a good one, and breeding unhealthy animals merely to satisfy human aesthetics reeks of unnecessary suffering to me—Persian cats, deaf Dalmatians, many dogs prone to hip dysplasia, whatever kills English Budgies younger than the wild genotype…the list could be made quite lengthy (but mindful of you, o humble reader, I shan't make it so).

Imagine my horror, then, upon encountering this ill-conceived sentiment:

All I can say is that God made humans and the animals. God is THE top engineer. If he didnt want it to mix he makes it genetically impossible (ie: humans and animals) He also made Man the keeper and caretaker of all the animals. ALL of them, he gave us brains to think, study and create.

Ignore the spelling and grammar for now. Ignore also the religious sentiments, because for the purposes of this discussion (and of that discussion), it doesn't matter whether life evolved naturally or under the guidance of some magical entity that wanted it this way. The simple and obvious fact of the matter is that either way, life is full of very nasty things, from anthrax spores to intestinal parasites, birth defects and cancer; the naturalistic fallacy gains nothing from being dressed up in holy robes. Of course, I spelled this out—and of course, the poster in question seems quite immune to reason, but I chiefly wrote it in the hope that someone less committed to fallacy will be swayed by it.

What I didn't say over there, because it would be a little too inflammatory and distract from the real subject (what was important there), is what disturbed me most about the cited sentiment: It is a total abdication of moral responsibility. The individual I quoted breeds animals, and takes the moral position that it is physically impossible for her to do wrong in so doing—the corollary is presumably that she therefore doesn't need to think about whether it's good or bad, because—it's impossible for it to be bad. (If you sense a minute earthquake just as you read these words, I suppose I am wrong and there is an afterlife—that'll be Ayn Rand, spinning in her grave.)

This crystallises exactly what I despise about even the non-violent manifestations of dogmatic religion. God wills it—case closed. No need for such a believer to exercise judgement, no need to carefully consider your actions, no need to analyse your own behaviour and check your motivations—most of all, no need to check whether your motivations ultimately result in harmful consequences! After all, God makes it impossible to do wrong.

It is entirely likely that this person will never cause any greater evil as a consequence of this version of faith than cross-breeding two birds of too-far diverged genomes and creating a generation or two of genetically unfortunate animals; it's quite likely that even this won't happen (this is a breeder of birds but not necessarily a breeder of hybrids, and hybridisation isn't necessarily a bad thing). But consider the morality of a person like this! Couple it with the very best intentions and the very greatest kindness, and such a person may well commit atrocities, because if you hold a deeply-felt belief that it is just not necessary to consider the consequences of your actions, you have no idea what evils you may do. You have blindfolded yourself to them.

In that famous paraphrasing of Voltaire,

Those who can make you believe absurdities, can make you commit atrocities.

haggholm: (Default)

I think of myself as a rationalist, I might say. To the best of my ability, I use reason to deduce what the world is like, and I do my utmost to avoid basing any beliefs on the irrational.

But, you might say, and already I am taking a deep breath, But surely you aren't always rational! So many thoughts and feelings and emotions and opinions…there's no rational justification for why you think a flower is pretty or a kitten is cute¹; even you, o staunch self-declared Champion of Reason!—Are not even you irrational in your beliefs about subjective things?

At this point I might take a step back and slowly count to ten, but that's my problem; some things just annoy me very easily. Now that we're all calm, let's have another look at the above argument—probably not so very different from something you have heard, or perhaps even said?—and tear it to shreds.

It's really very easy and won't take a lot of time or space. There is no such thing as a subjective belief, or an irrational opinion, or for that matter an objective opinion or subjective fact. Beliefs and opinions are positions, but they are positions on different issues: A belief is a position on an objective issue, where there is a right and wrong answer, and where facts and evidence can be used and strung together by rational arguments. An opinion, meanwhile, is a position on a subjective issue, which is by definition an issue on which there is no right and wrong position. (Read this if you disagree.) Anyone but a small child realises that if I say It's a fact that kittens are cute, I am making a nonsensical statement (though see below); it is my opinion that kittens are cute. If I opine that (say) kittens are cuter than puppies, while you think that puppies are cuter than kittens, there's no such thing as right or wrong. We may reasonably agree to disagree, or unreasonably refuse to, but there's no such thing as proving the other wrong.

Of course, it remains true that opinions may be based on facts (more precisely, beliefs about what the facts are). If your opinion that puppies are not cute is based on having only ever seen the puppies of the Hairless Chinese Crested dog, then some new facts may cause you to re-evaluate your position on subjective matters. Or it may not. Perhaps you just don't find puppies cute. The point is, beliefs and opinions are intertwined, but there is a fundamental difference, and it is good to be aware of it.

The answer to the hypothetical (but, alas, not very hypothetical) argument at the beginning of this little essay, then, is this: I strive to be rational rather than irrational in matters of fact and belief; in matters of the purely subjective, rationality and irrationality have no meaning; these are un-rational things, orthogonal to logic, and insofar as your question makes any sense at all, the answer is no: I am not irrational when I take up arbitrary positions on subjective issues, because there is no rationality to defy.

Now you may protest that I'm only arguing semantics, so I'll dissect two pet peeves for the price of one and point out that semantics refer to the meaning of the elements of language (words, sentences, etc.), and to the study thereof, and if you really mean to protest that You're only arguing about the meaning of what I said!—then, well, yes, and I don't really know how to tackle your argument without considering its meaning through the semantics of its presentation.

¹ Actually, I gather there's a sound evolutionary reason for why we find puppies and kittens cute: They share those infantile traits which we have evolved to find adorable in order to trigger our nurturing instincts: Proportionally large heads, small snouts or noses, large eyes, and so on. These features are common to many (most? all? placental?) mammals and so probably go back a long way in our evolutionary history. I'm sure you still get my point, though.

Faith

May. 23rd, 2007 03:31 pm
haggholm: (Default)

Written about a month ago when I got involved in a debate forum, which I subsequently abandoned because it inspired me to write this.

One of the psychological phenomena most puzzling to me is the one called faith. I do not mean in the secular sense in which the word is sometimes employed (I have faith in your ability to do this), but the religious sense, which I define as belief without evidence. It may be that some theist reading this takes offence at that statement, but I have never once seen the word employed in a context where evidence is available. Rather, You must have faith! is a statement frequently used or resorted to when no evidence exists (or when purported evidence is overthrown).

Let me reiterate this, because it's important and easily leads to a debate about semantics—well, semantics (the meaning of words) do matter, so let's nail them down for the purpose of this debate! If you tell me (actual example of what I've been told) You have faith that the building you are in will not collapse, I will assert that it is not the same thing. This is faith based on reason and experience. There are thousands of similar buildings around that do not spontaneously collapse, and there is no reason to believe that this one differs in a crucial way from those. It has stood for a long time and shows no sign of structural damage. Architects have staked their reputations and livelihood on the safety margins, and engineers and construction workers staked their lives in working on it. In other words, there is plenty of evidence that the building won't collapse. You can call this faith if you like; I call it reasonable belief and use the word faith to refer to belief not based on such tangible evidence. Like this definition or not, please keep it in mind as you read on.

So what's this religious faith about, then? It seems to be about believing what you have been told without being given any specific reason to believe that it is true. It may take the form of believing everything you are told by your pastor, rabbi, yogic guru, or imam. It may consist in considering the Bible, the Qur'an, the Veda scrolls, or the Elder Edda inerrant. Strangely, it may sometimes consist in taking one of these scriptures—the Bible, say—and believing some of the things it says based on no other evidence, whilst discarding other bits (generally ones that offend the believer's moral sensibilities). It seems to me that this is based on an a priori assumption that the scripture as a whole is true, and each statement should be held true unless proven false; whereas a rationalist world view (one to which I adhere) demands that we consider every claim suspect unless some evidence can be shown to support it.

Sometimes excuses are offered up for this; the most recent, the lamest, and the most amusing that I have heard to date is this, to paraphrase: The Bible contains scientific accuracies. It's hard to believe that's not a joke, is it not? Pretty much every book in existence makes some mention of things that are scientifically verifiable—the Bible, the Qur'an, the Illuminatus! trilogy, and even, I expect, Mein Kampf; this in no way lends credibility to their general contents. I could take any load of nonsense and insert some facts.

But most of the time it seems to come down to…well, to nothing at all, really; just blind faith without any kind of rational, evidentiary, or logical support.

And people use the word faith as though it had positive connotations!

There is another word that describes the same phenomenon, and one which, although its meaning (within the context being discussed) appears closely related to faith, has very different connotations; that word is credulity—though in a very contextual form. I'm sure that many readers (or at least ‘many’ relative to the total size of my readership…) will consider this an offensive statement when applied to religion. Oddly, the same is probably not true with respect to any other topic. Consider a text that is some two thousand years old, and consider a person who, although he has no corroborating evidence for its claims, believes whole-heartedly in it and will allow nothing to change his mind. If it is a text on astronomy, or anatomy, or physics (on an earthly scale), I am sure we will all agree that he is just plain wrong-headed. If it is a text on religion—on the origins of life—on the nature of the universe on a grander scale—why, then, nothing could be more sound than believing; it is not credulity, but faith!

(Since I first wrote this little essay, this comic went up, rather neatly accentuating the above.)

What really puzzles me is that some people go on to describe this as a virtue. Some people would have us believe that it is better to go on blind faith than to use reason and critical thinking. I can see why they should like people to do so—but what benefit can this possibly have for the followers of the creed?—it is all too obvious how it benefits the leaders.

haggholm: (Default)

I'm pretty familiar with depression. I have experience of it first-hand, second-hand, and third-hand (i.e. as a sufferer, as someone who knows sufferers, and as someone who has heard and read about sufferers). Depression launches a many-pronged assault on those who suffer from it; it is a subtle and complex thing. One of the more devious and peculiar ways in which this manifests is in the way it causes guilt in its sufferers.

There are some people, I'm told, who upon hearing that someone is depressed will tell the depressive It's all in your head; just shake it off, or something to that effect. This is, by and large, callous (if spoken out of anything but ignorance) and totally ineffective, except insofar as to effect guilt. I've never been told this, personally, but I didn't need to—I came up with the accusation all by myself. In fact, in my experience, most depressives seem to feel this. (It's not a large sample, admittedly, but 100% of a small sample may still be taken to indicate something…)

This often takes the form—often explicitly—of a feeling that Other people have it so much worse than me, yet I can't even cope with this. I don't know how often I've said it. I've lost count of the times I've heard it. Why, I would ask myself, am I such a whiner—incapable of coping with this little depression, when X dealt with an alcoholic father, Y has an emotionally abusive stepfather, and Z has dealt with so many crises—yet they have all come through strong where I am weak?

And it's really kind of strange. If I catch a cold—and I'm just getting over quite a bad one—I don't feel guilty about it. Why should I? I contracted a virus that my immune system was incapable of dealing with quickly and effectively enough, so I got sick. Theoretically I could perhaps have prevented it, by eating more vitamins, getting more sleep, washing my hands more often, or some other means—but oh well, I didn't; I got sick and now I'm better and that's the end of the story. I also don't feel shame that my immune system is (arguably) weaker than that of someone who was exposed to the same pathogens and didn't end up sick.

With a mental illness—such as depression—it's not like this at all.

Because it's a disorder of the mind, there is this notion that one should just shrug it off—Just get over it! But it doesn't work like that, as any sufferer of depression can tell you. My mind can no more shrug off depression than my body can shrug off a viral infection; much less easily, in fact, because by and large my immune system does a good job of keeping me on my feet. Why is this? Why is it that if I am brought low by a viral or bacterial infection, by fractures, contusions, concussions, or lacerations, by food poisoning or a chemical imbalance in my blood like high cholesterol or low blood sugar, that's acceptable (in a moral sense)—but if I have a chemical imbalance in my brain, I'm supposed to just shrug it off? Or—if my mental disorder is more complicated or less easily pinpointed than that—if my thoughts are just twisted into a Gordian knot, why is it that my mind is assumed to be a cutting blade?

I don't know the answer to these questions—the only real answer I have is If you talk like that to depressives, kindly do us all a favour and shut up because the tough love method only goes so far and only with certain people, and depressives as a rule have low enough self esteem as it is without being told they're lazy for not magically fixing themselves. But I do wonder, and it occurred to me that this might be related to a very peculiar division.

Did you notice that I made a very clear distinction in the earlier parts of this little essay between physical disorders and mental ones? Very possibly not—we're used to making that distinction—but I did, very deliberately, although I cringed every time I did it. Because, really, that distinction is largely nonsensical. It is, I believe, chiefly a relic of a lack of understanding—of a primitive time when the physical nature of thoughts and emotions was entirely mysterious (as opposed to now, when it's only mostly mysterious…) and, critically, not understood to be a physical process. Understanding of anatomy was virtually non-existent (consider the ancient peoples who thought that the heart or stomach was the seat of consciousness!) and belief in spirits was rampant. These days it is intellectually evident not only that the brain is the physical seat of consciousness, but also that the mind itself is an emergent effect of very physical processes: Chemical, electrical, and so forth.

The consequences of this are, of course, complex and manifold, but primarily it creates a kind of disconnect in the way we regard ourselves. Nobody frowns at an expression like body and mind, as though they were separate things! We would not speak thus of body and liver, body and spleen, or body and large intestine. It also leaves room for concepts along the lines of mind over matter, which implies that mind is not of matter. Without this illusion or delusion, there'd be little room for ideas such as physical disorders being qualitatively different from mental disorders; they are all physical disorders, albeit some more intimately involved with the mens (or mentis: Latin for mind, hence mental et al.). I think it is detrimental in a much more general sense; it opens the way for so many mistakes of thinking: The body being subordinate to the mind, the mind's reactions to physical stimuli being shameful, and so on. In the context of the original topic, however, it is this: Depressives are often fooled, or fool themselves, into thinking that a mental disorder is non-physical; that the mind is non-physical; and that the metaphysical mind should therefore control the disorder.

In my own battle against depression, one of the largest milestones was overcoming this sense of shame. Ironically, what saved me was a massive tragedy. In 2004, my father died, suddenly and unexpectedly, of an aneurysm. Shortly thereafter, my younger brother, unable to cope with this on top of his own, previous problems, committed suicide. My family was of course plunged into a state of shock and grief—call it a form of depression. This is expected, and obviously nobody can blame anyone for feeling terrible after having lost two members of their family. In other words, I ended up in a state of deep and justified depression—a depression for which I felt no guilt.

I felt slightly worse than I had before.

This taught me a great deal. I realised that I had gone from deeply depressed to slightly more deeply depressed—but I had already been so low that the change was not really drastic. I also realised that People who have it so much worse cope better was preposterous, because I had already had it quite badly. My depression stemmed not from external circumstances—like poverty or abuse—but from brain chemistry and so forth; still, I had had it badly all the same. Suddenly people started telling me how impressed they were that I held together—how strong I must be—when I had said the same to others for years, and imagined myself so very weak! The disparity in strength was an illusion. Those people you may be looking at and admiring for their strength under pressure greater than that you suffer—they're just doing their best to hang on, just like you; you may one day find, as I did, that there's a low point where you keep going not because you are strong but because there is nothing else to do.

Or you may be overwhelmed by tragedy or bad brain chemistry or bad wiring in the head, and there's nothing more intrinsically shameful about this than passing out when you get up too fast or suffering from diabetes.

In writing this, I feel like I have expressed myself poorly, either because I wrote less well than usual or because the material demands higher standards. Whichever may be the case, I apologise, and I hope to revisit this at some point, rearrange it, and sum it up properly in closing. For now, I leave you with these thoughts in the hope that their relative rawness will encourage you to develop them further where I have failed to do so for you.

I hope, above all, that this does not leave you feeling pity for me or unequal to my strength—pity and admiration are things this essay was not meant to effect, and if you feel either, please re-read it, and read it more carefully this time! I write about my own experiences not to boast of my strength, but to illustrate how this perception of strength is usually an illusion; not to show what awful circumstances I found myself in, but that depression is depression no matter the external circumstances, and deserves to be taken seriously whether something obviously awful happened or not. I don't want pity for anyone, only understanding—and that being said, I'm happy to say that I've come a long way since 2004.

What I do hope—the ultimate (pipe?) dream in writing this essay—is that someone will read it who does feel this irrational guilt, weigh my words carefully, and find that his or her guilt is unnecessary—the condition is no one's fault—and that the weight of guilt be lifted off shoulders that already bear a very real and legitimate burden.

haggholm: (Default)

I recently had an email exchange with someone (on a pretty good and amicable level, fear not) on the topic of religion. One thing that came up was the common Christian theme of free will, which supposedly justifies Hell. Humans, the argument goes, enjoy the privilege of free will. We may freely choose to believe in God and go to heaven and enjoy eternal bliss; or we may freely choose to reject God, in which case we will go to Hell. (I disagree that belief is a simple matter of choice, but this is tangential to the topic du jour.) I responded to this with an analogy:

Suppose I am the supreme ruler of a kingdom. I make a royal decree: You may wear any colour of clothing you want. You have a totally free choice in the matter. Your will is unbound by mine. The only catch is that if I catch you wearing anything but green, I'll throw you in the dungeon, pull out your nails with pincers, and crush all the bones in your hands and feet. Still, you're free to choose that.

The analogy should be pretty obvious. The law of the land does not, in effect, say that you are free to choose what clothes to wear. Rather the law (whether we call it such or not!) demands that you wear green, and enforces it with a grisly punishment. If I tell people you're free to choose, that is merely a cruel mockery. Free will is not applicable under coercion.

The chief differences between my analogy and the biblical version are that the crime being considered is of a much more totalitarian nature in the Bible: Not a matter of doing, but a thought crime—I am, in this view, to be punished for the crime of not believing in a story that reason and evidence do not (in my view) support; and that the punishment is much worse—my hypothetical king merely tortured you for a little bit, while the Christian Hell is eternal torment! (Mark 9:43, Luke 16:19–28, …)

As an aside, I think that this issue is morally important. I consider any mainstream Christian who believes that I will go to Hell for my views malicious. I am of course personally quite untroubled by thoughts of Hell, being very skeptical indeed of its existence. However, if a person believes that I: God exists; II: God is all-powerful; III: God is perfectly just; IV: I will go to Hell for my unbelief, then I think that said person bears me malice. The reason is simple: If you believe that God will send me to Hell for my unbelief (IV), and God is all-powerful (II), then clearly (you believe) he has a choice in the matter and chooses to subject me to eternal torment. (I do not believe in any gods at all, but I assure you I would not make the choice of eternal torment, so this is not a simple free will issue.) As per (III), his decision was, in your opinion, a just and fair one. Ergo, you think it is fair that I should suffer eternal torment for not believing.

I've actually had someone tell me that they hold all of the aforementioned beliefs, and this person had the gall to take offence at my reacting unfavourably.

haggholm: (Default)

Originally posted as a comment to my friend Scott's blog, over here, but I think it stands well enough alone for me to post it here:

I think one of the major problems in all intellectual discourse (or lack-of-intellect-ual discourse) is that critical thinking is a skill not sufficiently well taught. At the risk of sounding arrogant, I tend to think of myself as a reasonably intelligent person; further braving the danger of the impression of nationalistic pride, I think that I have come through a respectable educational path: Swedish elementary and secondary schools, Canadian universities. Even so, and even given a natural tendency toward skepticism and a love of learning, I didn't learn to critically evaluate sources until rather recently; presumably it was gradual over the course of many years, but it reached a culmination (relative, of course, to whatever my present understanding may be considered to be) less than two years ago when I started to read research papers and came to realise that even at the cutting edge of scientific endeavours, there are opinions and errors and uncertainty, and even in very well-written papers by very intelligent and educated people, even in papers with terrific and accurate ideas, there may be errors. (Reading Darwin's The Origin of Species was perhaps a more obvious lesson in the same, although more recent.)

So where am I going with this? Well: Even with the resources available to me, I was still left with some nasty residual notion that purported factual writings fell into the two categories of true and not true. This is, alas, a dogmatic view—one with room to eliminate dogma I could identify as false, happily, but dogmatic nonetheless. Here we go back to Arrogance Land: I suspect that if I was left with so faulty a perception for so long, then a great many people—certainly not all! But a significant chunk of the population, whether a large minority or a majority, great or small—are probably likewise impaired. Evidence certainly seems to back up my arrogance, here.

Now, I think it is essential for a rational view of the world to accept or reject every purported fact provisionally, and to keep in mind that even well-established facts like the theories of gravity, electromagnetism, and evolution have a probability of (very very slightly) less than 1, and even way-out-there conjectures like (equivalently) Christian theism, voodoo, and Russellian teapotism (with apologies to the late, great Russell for coining such a term) have a probability (very slightly indeed!) higher than 0. P = 1 is dogmatic acceptance. P = 0 is dogmatic rejection. 0 < P < 1 is acceptable territory for rational thought. (Of course, the deviation ε from 0 or 1 may often be of the order ε ≈ 10^−x for quite a large x…)

As I'm writing that, it seems to me that a reasonable hypothetical model for Truth would be a Bayesian network of facts…
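
To make that concrete, here is a minimal sketch of how such provisional probabilities behave under Bayes' rule—purely illustrative Python with made-up numbers, nothing from the original comment: so long as the prior and the likelihoods stay away from the dogmatic extremes, no amount of evidence ever drives the probability to exactly 0 or 1.

    def update(prior, p_evidence_if_true, p_evidence_if_false):
        """Bayes' rule: return P(hypothesis | evidence)."""
        numerator = p_evidence_if_true * prior
        denominator = numerator + p_evidence_if_false * (1.0 - prior)
        return numerator / denominator

    # Made-up numbers: start out nearly (but not dogmatically) certain...
    p = 0.999
    # ...then apply ten pieces of evidence that are much likelier if the
    # hypothesis is false (0.8) than if it is true (0.05).
    for _ in range(10):
        p = update(p, p_evidence_if_true=0.05, p_evidence_if_false=0.8)
        print(p)
    # The probability plunges towards 0, yet never reaches it exactly:
    # the "0 < P < 1" territory described above.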

haggholm: (Default)

It occurs to me that science, as a whole, is a process of memetic evolution. We could even borrow terminology from biological evolution: There are mutations and crossovers; the fittest survive—those most able to stand up to critical scrutiny … I think perhaps the largest difference is that while abiogenesis must be regarded as an uncommon event in nature (or at least, it is reasonable to assume that organic material that is now produced through some form of abiogenesis is consumed by extant, more advanced life forms), memetic abiogenesis—“anepistegenesis”?—is comparatively common. (Of course I do not mean that these ideas are spontaneous; they are properly derived from observation, but they have no memetic ancestors.)

Edit: It seems that Dr. Dawkins's adjective is memic, not memetic. Mea culpa.
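
For what it's worth, the borrowed terminology maps directly onto the loop of a toy evolutionary algorithm. A purely illustrative Python sketch—bit-strings standing in for ideas, and a count of 1-bits standing in for surviving critical scrutiny, all invented for illustration:

    import random

    GENOME_LENGTH = 20

    def fitness(genome):
        # Stand-in for "survives critical scrutiny": the number of 1-bits.
        return sum(genome)

    def mutate(genome, rate=0.05):
        # Flip each bit with a small probability.
        return [(1 - bit) if random.random() < rate else bit for bit in genome]

    def crossover(a, b):
        # Splice two parent genomes at a random cut point.
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(30)]

    for generation in range(50):
        # Selection: the fitter half survives and reproduces.
        population.sort(key=fitness, reverse=True)
        survivors = population[:15]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(15)]
        population = survivors + children

    print(fitness(max(population, key=fitness)))  # typically at or near 20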

haggholm: (Default)

So a while ago, I joined Facebook, mostly out of curiosity: What's this all about? Initially it seemed it wasn't really about much of anything, so I pretty much left it alone. The other day, however, I received a group invitation—it seems someone started a “UBC Atheists” group and searched for UBC students of that persuasion. I'm not sure if this will lead to anything at all, either; right now the group is in its infancy, so it's too early to tell. Anyway, someone posted something on the discussion board that I took exception to.

“Won't somebody please think of the children?” )

My generic answer, bla bla bla )

The good part: “Every sperm is sacred, every sperm is good …” )

He has yet to reply to this.

haggholm: (Default)

The thing that confuses me the most about Christian dogma is…well, I guess it's the lynchpin of the whole religion, really; it's the untenability of the notion of divine omnipotence and benevolence—and it's not even the usual “Why does evil exist?” argument.

As I understand it, the theology behind the death and crucifixion of Jesus is that

  • Humans are corrupt
  • Humans can't possibly redeem themselves to the point where they don't deserve to burn in Hell forever (a cynical and anti-humanitarian sentiment that I disagree with, but this is incidental)
  • Someone had to pay a price of blood and suffering for all this, so Jesus was selected as the collective whipping boy of mankind

How does this make sense to those who believe in it? I've heard people say things like “If you don't believe, God can't forgive you”—say what? Your omnipotent and infinitely benevolent deity is incapable of forgiving your failures unless you are a Christian?

No. No. I'm sorry, but this is bullshit. If you assume an omnipotent deity, then everything that deity does is out of choice—that's what omnipotence means. If said deity then condemns people to an eternity of suffering, then it is because the deity chooses to do so, not because of a lack of choice. You can then make up your own mind about the benevolence.

This musing was inspired by NormalBobSmith.com. I'm surprised I haven't seen him raise the question, really, but who knows—it may be somewhere in the hate mail section.

haggholm: (Default)

I really dislike credit cards. I have a VISA card—it's technically more like a debit card than a credit card in that money is withdrawn directly from my account, et cetera, but it's functionally identical to a credit card—but it makes me profoundly uneasy, because the security is so terrible. It's true that if your credit card is stolen and used, you'll probably be refunded (never had to deal with it), but this is definitely a hassle. What I want is a card that I can trust never to leave me to jump through any such hoops, and a VISA card definitely isn't it, because if someone stole my VISA card, they could buy whatever they like online, and make virtually unlimited purchases (up to my card's financial limits, of course!) in virtually any store. (“Virtually”, here, because cashiers are supposed to verify the signature, but how often do cashiers do that? And if they did, how hard would a forgery be? In Sweden, they usually either ask for photo identification or require you to enter your PIN, but even there, this can't be relied on.) True, they can't take out cash from ATMs without my PIN, but that's a small comfort.

Debit cards proper—like my Interac card—are better, because my card is useless without my PIN, so I'm not nearly as fearful of having that stolen. It still does have disadvantages, though—cards can get bent and damaged pretty easily, and there's the minor but very, very frequent frustration of non-standard card readers. If I walk up to the clerk in any given store, I don't know whether I'm supposed to hand it to the cashier or swipe it myself. If I'm to swipe it, the unit either has a slot on top, in which case I don't know whether the magnetic stripe should face me or away from me, or it has a slot on the right, in which case I don't know whether the stripe should face left or right. I then have to hit a few buttons—OK, select account, and so forth—whose locations are not standardised, so that I either can't use muscle memory or, as in my case, I have muscle memory from where I use the card most often and therefore hit the wrong buttons when I go elsewhere.

One swipey thing I own that I really do like, although not financially related, is my transponder key fob. It looks a little like this. Mine is hexagonal, about an inch across, maybe 3 mm thick, and unlocks the doors in the computer science building that I'm authorised to open when I swipe it across a reader outside. It sits on my key ring, where it's more quickly and easily available to me than my wallet, and is very sturdy. What I wish is that my bank would give me a transponder key fob, and that stores would have transponder fob readers as well as, or preferably instead of, card readers.

Convenience and sturdiness are two reasons. (Also, it doesn't matter which way I swipe it, so I don't have to figure out directions. And I can't see handing over my key ring to any cashiers, so “the customer does the swiping” would have to become a standard.) Another is that the fob is about the length of, and a bit wider than, the outer joint on my thumb. What this tells me is that this gadget is the perfect candidate for biometric verification. I want a fob with a thumbprint reader. Note—oh ye privacy nuts—that I wouldn't need to give any biometric information to my bank; I'd just need my print imprinted on the fob itself. The fob itself could then contain electronic identification in the same fashion as our current credit and debit cards, which would only be transferred if the biometric information is valid. For better security, this could be combined with a PIN, resulting in a financial gadget that you couldn't steal money out of even if you stole both the fob and the little note with the PIN I keep in my pocket. (OK, bad example. I don't keep such a note either in my pocket or anywhere else. But wouldn't you sleep easier knowing that your grandma's money is now safe?) To make things even better, your fob could track the transactions you authorise, so that if someone attempts to defraud you (by charging more money to the electronic information your fob gave to their reader after you'd authorised them), your bank could check your fob and verify that you actually made no such payments.

It occurs to me that one of the issues I have with credit cards, that of internet security, wouldn't be addressed by this at all. I can think of two reasonable solutions to this. One is a system that at least some banks use—my very good friend [livejournal.com profile] sheepykins's bank offers an online service that generates a one-time credit card number for online payments. This is kind of neat, but to me it seems more like a “clever hack” than a real solution—How do we give our customers some protection whilst working within a poor system? What I would much prefer is a “push” rather than “pull” payment system: Chapters/eBay/whatever gives me an account number (deposit only), I manually transfer the money from my bank's site, and there it goes. Or: Charging to my account only puts a request on my account, which I have to authorise in order for the money to go through. Yes, it would be an extra step in online shopping (per confirmation, at least—I could batch them up and authorise multiple transactions at the same time), but I'd be happier knowing that no one can take my money without my explicit say-so.
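
To sketch what I mean by “push” rather than “pull”—purely illustrative Python with invented names, not how any real bank's system works—a merchant's charge would only create a pending request, and no money would move until the account owner explicitly authorised it:

    class Account:
        """Toy model of a push-style account: a charge is only a request
        until the account owner explicitly authorises it."""

        def __init__(self, balance):
            self.balance = balance
            self.pending = []   # (merchant, amount) requests awaiting approval

        def request_charge(self, merchant, amount):
            # The merchant's side: record a request; no money moves yet.
            self.pending.append((merchant, amount))

        def authorise(self, merchant, amount):
            # The owner's side: pay a specific pending request (refusing one
            # simply means never calling this).
            if (merchant, amount) in self.pending and amount <= self.balance:
                self.pending.remove((merchant, amount))
                self.balance -= amount
                return True
            return False

    acct = Account(balance=100)
    acct.request_charge("Chapters", 30)   # a mere request; nothing withdrawn
    acct.authorise("Chapters", 30)        # now, and only now, the money moves
    print(acct.balance)                   # 70

(Batching, as mentioned above, would just mean authorising several pending requests in one sitting.)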

haggholm: (Default)

Just thinking out loud

First, a definition: I use the word “democratic” to refer to any government system that attempts to rule according to the will of the people (Greek dêmos, people + krátos, rule). This could be a direct democracy where every citizen gets to vote on political decisions, or it could be a representational democracy (with a proportional representation system, not the electoral college system of the US), or it could be something different that still comprises a best effort at representing the collective rule of the people. Please bear this in mind when you point out my mistakes and/or inconsistencies below; I'm not necessarily talking about your government, nor is it restricted to any single form of organisation.

Second, I think that there's no better general governmental philosophy than democracy, because any other philosophy necessarily consists in concentrating power in some subset of the population, and humans being the fallible creatures that we are, I believe that any such minority will unfortunately be prone to corruption in the long run. (All-too-obvious examples are the various “communist” regimes that have arisen and occasionally fallen around the world.)

The task of any democratic government system is to enact the will of the people. Typically, there is no better way to do this than to follow the majority opinion. This can be a terrible thing, it seems, because (I'm sure you'll agree) the majority is often wrong. I think so, I'm sure you think so, I expect most people think so. Unfortunately, this is extremely subjective, and while I may be utterly convinced that my views are correct and humanitarian (I really do think so), it's impossible to ethically legislate this sort of viewpoint.

The problem with this sort of government—where the majority opinion is respected—is that the majority will tend to get increased traction and political inertia. The almost inescapable consequence is that minority opinions are marginalised. If we make the (rather drastic) assumption that the ethical thing to do is to respect the contemporary majority view, then this works momentarily, but since opinions and ideas about politics, ethics, et cetera change, it is virtually inevitable that this system will become outdated and fail to reflect the majority opinion further down the road. A political system that seeks to represent the majority must therefore protect the political clout of minority groups in order to function in the long run. And I don't mean ethnic minority groups. I mean radical political dissenters, like Saudi Arabians who believe that women are just as good as men, Texans who believe that it's not unethical to fail to own a gun, American abortionists, Swedish anti-abortionists, neo-Nazis, and the Ku Klux Klan.

In other words, both people we agree with and people whose very existence we find offensive.

Maybe you're waiting for a point here that will draw the above to a neat conclusion, but unfortunately I don't have one; this is something I've been thinking about for the last couple of weeks. My only conclusions so far are

  • Democracy is hard.
  • A “true” democracy (by my definition) would require an unalterable constitution that provides protection for political minorities, but everything else would need to be mutable. (It need not be easy. For example, while my knowledge of the political machine back home is very vague and shady, I do know that any change to the Swedish constitution needs to be passed twice—once, then again after the next national election.)
  • I'm not sure if there are any countries that are truly democratic.
  • A functional democracy must have anti-democratic elements in order to remain democratic in the long run; that is, democracy—as here defined—contains a necessary self-contradiction.
  • I'm not sure if a democracy can ever be truly ethical if (in order to remain democratic) it must protect the opinions (and right to express them) of evil people, such as racists and suppressors of freedoms, be they of speech or religion or any other thing.

Please note that there's nothing in the above that necessitates allowing evil deeds—such a philosophy may demand that we let rapists, racists, and Jack Chick speak their minds and attempt to vote and democratically legislate their way, but it does not demand that we allow them to rape, lynch, or obliterate all adherents of other religions.

Now, the real problem with all the above: Protecting political minorities is not nearly as simple as saying “even though you're female / racist / black / white / gay / transsexual / vegetarian / tall / whatever, you still have a right to vote”. It's much easier than that to marginalise—witness the problems of economically disadvantaged groups, who lack the education and financial clout to organise political movements to improve their situation. Or consider a country where homosexuality is entirely outlawed—who will dare to stand up and fight for gay rights? True, given my above definition he can still vote, but if that vote has to come from prison … I'm sure you can see the problem.

On this particularly vague note, I will (fail to) conclude this little spewing forth of thought. Discussion is encouraged and appreciated.
