2016-04-01 11:57 am

Free will

When people talk about free will in the context of philosophy, without knowing the terminology of the field, they often seem to mean something like libertarian free will (a position related to political libertarianism only etymologically): the belief that you “could have done otherwise”, that your will is properly free if, and only if, were you to re-make a prior decision under perfectly identical circumstances, you could choose differently the second time.

Unfortunately, I don’t think that libertarian free will is logically coherent. Any event—decision or otherwise—is either deterministic or non-deterministic. If it is deterministic, this means that it is causally determined: the state of the universe around me, along with my disposition in the form of knowledge or beliefs, opinions, desires, goals, and so forth, fully determine what I will do. If you were omniscient, you could in principle predict my every action. On the other hand, if the decision is non-deterministic, this means that there is an element of randomness to it: to some degree, my decision is not determined by reality around me, nor by what I think or want. Intuitively, this does not seem to me like “free will”: in fact, as much as the deterministic version limits freedom, the non-deterministic version limits will.

As far as I can tell, libertarian free will is supposed to occupy some magical middle ground that’s neither deterministic nor non-deterministic. This violates the law of the excluded middle—that is, it requires propositional logic to be wrong! This seems absurd and prima facie wrong, and even if it were true we could ipso facto not reason about it.
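To spell the dilemma out in symbols (the notation is mine, purely for illustration): write $D(e)$ for “the decision $e$ is fully determined by the prior state of the world and of the agent”. For any decision $e$, the law of the excluded middle gives

    \[ D(e) \lor \lnot D(e), \]

whereas libertarian free will, as characterised above, seems to demand a decision that is neither determined nor undetermined:

    \[ \lnot D(e) \land \lnot\lnot D(e), \]

which is a flat contradiction.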

Note that, although the terms often arise in free will discussions, I have not hitherto said anything about materialism and dualism. This is because I honestly don’t see that it particularly matters. As it happens, I am a materialist: I think that our minds are what our brains do. But my argument about free will does not depend on this. If you want to suppose that your mind is really some sort of non-material spirit stuff, this does not affect the dilemma between (non-free) determinism and (non-willed) non-determinism. Dualism does not solve the problem of free will, because the problem is not about physical versus non-physical causation, but rather about the logic of causality itself. Christian apologists often argue quite vehemently about this, because metaphysical free will is essential to their theology; but their arguments seem largely to amount to an assault on physical causation without ever addressing the true problem—and as a rule they are quite fond of the laws of logic, so the excluded middle remains a major problem. Put bluntly, they want to absolve their God of responsibility for the things that we do out of “free will” in spite of his supposed omnipotence and omniscience. It does not work.

If there is a logical way out of this quandary, I've failed to find it; I would be fascinated to hear one, but I'm not holding my breath. As far as I can tell, attempts to salvage libertarian free will are less clear-headed philosophy than desperate attempts to justify what we all intuitively feel in the face of what is logically true.


Furthermore, although I can readily see the objections to free will raised by the spectre of determinism, it's not clear that they retain all their apparent force when you look more closely. Normally, I think of a free choice as one where no one is constraining or coercing me. It can be deterministic. It can even be predictable, which is much stronger than merely deterministic: if I strongly prefer chocolate to vanilla ice cream, and you know I do, I can still freely choose to have chocolate every time. The fact that you know doesn't constrain me. I could choose vanilla if I wanted to—the fact that, given that I don't want to, I never do, is precisely what makes my decision free, even though it is an explicitly determined choice!

In fact, every good decision is deterministic. If I choose according to my best knowledge and current beliefs, and make the choice that best aligns with my dispositions and desires, in the sense of (so far as my knowledge can tell) being optimal toward achieving my goals, that is a deterministic choice: but if I had some greater metaphysical freedom, it's still the one I’d hope to make. A non-deterministic component can only serve to randomly push me away from this optimal choice. Is that more free? And is it truly willed if it is random?

I’m not terribly excited about the term compatibilism, but I suppose that in effect I am largely a compatibilist, and my ἀπολογία can be summarised as: The alternative to deterministic free will entails a freedom to randomly act against my own interest, which perverts the word freedom into incoherence. As Dennett might say, the kind of free will worth having is deterministic.

Perhaps the free will problem is best addressed by Ordinary Language Philosophy:

Non-ordinary uses of language are thought to be behind much philosophical theorizing, according to Ordinary Language philosophy: particularly where a theory results in a view that conflicts with what might be ordinarily said of some situation. Such ‘philosophical’ uses of language, on this view, create the very philosophical problems they are employed to solve. This is often because, on the Ordinary Language view, they are not acknowledged as non-ordinary uses, and attempt to be passed-off as simply more precise (or ‘truer’) versions of the ordinary use of some expression – thus suggesting that the ordinary use of some expression is deficient in some way. But according to the Ordinary Language position, non-ordinary uses of expressions simply introduce new uses of expressions.

[Internet Encyclopedia of Philosophy]

Maybe the fundamental problem of the free will debate is that it has developed a problematic concept of free will that wouldn’t exist if we didn’t have the discussion. In practice, determinism is perfectly compatible with every kind of freedom we care about or can measure; but in philosophy, philosophers and theologians have defined a problematic concept into being. In that case, we can resolve it as a matter of two different claims: We do have free will, in the OLP sense, and this is compatible with determinism; but we do not have [libertarian] free will—which, however, does not actually matter in reality.

2015-10-10 11:57 pm

Occam’s Razor is more than a guideline

Numquam ponenda est pluralitas sine necessitate

Occam’s Razor is a famous philosophical device, a pragmatic rule for choosing among competing hypotheses: always prefer the one that necessitates the fewest additional assumptions.

Wikipedia contains this description:

Occam's razor (also written as Ockham's razor and in Latin lex parsimoniae, which means 'law of parsimony') is a problem-solving principle devised by William of Ockham (c. 1287–1347), who was an English Franciscan friar and scholastic philosopher and theologian.

The principle can be interpreted as

Among competing hypotheses, the one with the fewest assumptions should be selected.

In science, Occam's razor is used as a heuristic technique (discovery tool) to guide scientists in the development of theoretical models, rather than as an arbiter between published models. In the scientific method, Occam's razor is not considered an irrefutable principle of logic or a scientific result; the preference for simplicity in the scientific method is based on the falsifiability criterion. For each accepted explanation of a phenomenon, there is always an infinite number of possible and more complex alternatives, because one can always burden failing explanations with ad hoc hypothesis to prevent them from being falsified; therefore, simpler theories are preferable to more complex ones because they are more testable.

I’d argue, however, that the paragraph cited above actually contains at least the seeds of good reasons why it is more than a mere heuristic device. Consider: There is always an infinite number of possible and more complex alternatives, because one can always burden failing explanations with ad hoc hypothesis…. This means that for every concrete question, there is an infinite number of answers; one of them is maximally correct, some are plain wrong, and an infinite number fit the data but make unjustified and unparsimonious assumptions. But then, the simplest explanation that fits the data is actually very special, and not just because it’s more testable, but because of that privileged position. It alone accounts for the observed data without adding extraneous assumptions.

This leaves us with a choice, not just on a heuristic and testing level, but on an epistemological level, too: Do we accept only the one explanation permitted by the data yet spared by Occam’s Razor, or do we accept more explanations? If we do not restrict ourselves to only the simplest working possibility, I do not know of any reason why we should not accept all possibilities. Then, since we have an infinite number of possible explanations, whereof only one is maximally correct, the odds of our choosing the best solution are one out of infinity—which is to say, zero. Neglecting parsimony, then, does more harm than merely making it harder to test our hypotheses: it statistically guarantees that we will choose the wrong explanations!
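To make the arithmetic explicit (a rough sketch; the labels $E_1, E_2, \ldots$ and $E^*$ are mine): suppose the data admit an unending sequence of compatible explanations $E_1, E_2, E_3, \ldots$, one for each additional ad hoc assumption, of which exactly one, $E^*$, is maximally correct. If we allow ourselves to pick among the first $n$ of them with no principled preference, then at best

    \[ \Pr(\text{we pick } E^*) = \frac{1}{n} \xrightarrow{\;n \to \infty\;} 0, \]

so admitting the whole sequence and choosing arbitrarily leaves us, in the limit, with no chance at all of landing on the right one.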

So:

Occam’s Razor provides a rule for choosing a single explanation, one with strong heuristic properties, while avoiding the arbitrary choice of complex solutions that are statistically certain to be wrong in detail.


That’s perhaps a bit abstract, so let’s ground it a bit. This actually came up in a discussion on religious epistemology, where I set up something like this: Agnostic (sometimes called “weak”) atheists make a negative existential claim, not based on the existence of positive evidence for non-existence, but based on the lack of positive evidence for existence. Or, in plain language: I’m not an atheist because I have evidence there’s no god; I’m an atheist because there’s no evidence of a god.

But then, runs a certain standard counter-argument, the agnostic atheist is on the same rational footing as the theist. Neither has evidence directly supporting his position or directly refuting the contrary. (Perhaps, this may go on to say, the ideally rational stance is ‘strict’ agnosticism, apparently meaning a refusal to commit to any stance on likelihood.)

This, however, I reject on the basis of a stronger Occam’s Razor.¹ The reason is this: Theists and I agree on the existence of physical reality, each other, rocks, trees, suns, moons, and so on. When we run out of established physical reality, I stop. The theist goes on to add unsupported assumptions—and that’s where the trouble sets in. After all, if you are willing to accept one god without evidence, why not two? Or three? Or a billion? If you accept (though you cannot demonstrate it) that the universe was designed by God, how can you be sure it wasn’t actually designed by aliens pretending to be God? Or wizards posing as aliens pretending to be God? Or Smurfs dressed up as wizards posing as aliens… Well, you see where this goes. I can extend this list into infinity.²

If you are willing to accept any proposition without positive evidence, on the mere basis of an inability to disprove it, or the impossibility of doing so, then either you must regard all such propositions as equally valid; or you must have a method of separating your proposition from the infinite number of other propositions with the same property (the property that it hasn’t been disproven, or is not falsifiable); or you are being completely arbitrary and no longer rational. But you can’t have a rational method for separating it, for if you did, it would have to be positive evidence, and you wouldn’t face this problem to begin with; so either you are being arbitrary and non-rational, or you must accept them all.

And if you hold that the infinity of possible explanations is valid territory to enter, then your preferred explanation is wrong. How do I justify this assertion? Suppose that each explanation can be laser-etched onto a grain of sand, and we take all possible explanations and let the wind carry them into the sandy desert. This is an infinity of explanations, and as the text is too small to read, you cannot know which is which. With no positive evidence to point to any one explanation, your choice is arbitrary relative to the truth. Maybe one of these explanations is the correct one—but it’s one grain of sand in the desert; and it is an infinite desert. When you bend down and pick out a single grain of sand, I can be confident that you chose the wrong one.

I prefer a more consistent principle of reason, Occam’s Razor: Choose the simplest explanation that fits observations (id est, that isn’t falsified). If our investigation has been thorough enough, it is the right explanation. If not, then it is a good explanation to start from as we investigate further, and our investigation won’t be cluttered up by arbitrary (and almost certainly wrong) assumptions.

That is why, in the absence of existential evidence either positive or negative, assuming the negative is more reasonable than assuming the positive. We should be agnostic in the strict sense of being prepared to admit additional evidence—but that does not mean we should be holding our breath.


¹ This is pretty close to Hitchens’s Razor; in a way, it’s the two razors put together: Occam’s and Hitchens’s. Mine is a two-bladed philosophical razor!

² Or if not infinity, then at least until the text of my post exceeds storage limitations. I wonder if I could write a Haskell program to generate an infinite list of increasingly unparsimonious complications…
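For what it’s worth, here is a minimal Haskell sketch of how that might look; the wording of the “disguises” is lifted from the post above, and all the names are made up purely for illustration:

    -- A lazy, infinite list of ever less parsimonious "designers", each step
    -- wrapping the previous one in one more gratuitous assumption.
    designers :: [String]
    designers = scanl wrap "God" (cycle disguises)
      where
        disguises = ["aliens pretending to be", "wizards posing as", "Smurfs dressed up as"]
        wrap inner disguise = disguise ++ " " ++ inner

    -- Turn each "designer" into a full, equally unfalsifiable explanation.
    explanations :: [String]
    explanations = map ("the universe was designed by " ++) designers

    main :: IO ()
    main = mapM_ putStrLn (take 4 explanations)
    -- the universe was designed by God
    -- the universe was designed by aliens pretending to be God
    -- the universe was designed by wizards posing as aliens pretending to be God
    -- the universe was designed by Smurfs dressed up as wizards posing as aliens pretending to be God

Thanks to laziness the list really is infinite; take merely decides how far into the desert we bother to look.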

2015-10-10 08:54 pm

William Lane Craig and his bankrupt ontology

I recently watched a video of a debate between famous apologist and Liar for Christ, Dr. William Lane Craig, and well-known cosmologist and theoretical physicist, Dr. Lawrence Krauss. Obviously all my sympathies lay with Dr. Krauss, so it was with some mortification that I watched him apparently just fail to understand Craig’s distinction between an epistemic and an ontological basis for moral behaviour.

Those terms weren’t used in the parts I saw, but here is how I understand it:

  • An epistemic claim would be of the nature If not for God or revealed truth, we could not know what is morally right or wrong.
  • An ontological claim is different: it asserts that God is the basis not for the knowledge of moral truth, but for the existence of moral truth.

In other words, the epistemic claim is concerned with how we can know what is right and wrong, while the ontological claim deals with how there can (supposedly) be a ‘right’ and a ‘wrong’.

Craig, for example, claims that everyone is designed to have an innate sense of what is right and wrong, and therefore does not claim that religion is epistemically necessary to assess moral propositions, but does claim that his god is ontologically necessary. This distinction is what Krauss loudly and repeatedly failed to appreciate.

That’s not to say that I think much of the argument itself. The standard objection is a chestnut that’s been around for well over two thousand years and never convincingly resolved: the Euthyphro Dilemma. Its modern formulation, when addressing Christian dogma, runs something like this:

  • Is God good because he does what is intrinsically good, or because what is good is defined by what God commands?
  • If the former, then there exists an objective moral truth outside of God, who is therefore not ontologically necessary.
  • If the latter, then “God is good” is a circular and hence meaningless claim, and in fact whatever God commanded would by definition be “good”, regardless of whether it resembles what we in actuality think of as good.

Craig is a firm believer in the latter option, and to his dubious credit he carries it all the way by affirming the so-called Divine Command Theory. According to DCT, if God says to kill every man, woman, child, and head of livestock in the land you invade (1 Samuel 15), then it’s right and morally good to do so; and Craig has consistently defended this view: The genocide described in the book of Samuel¹ was morally right. It was morally good to kill all those babies.

Personally, I find this view reprehensible if not downright monstrous. But there are further problems with this view that I don’t see brought up.


If God defines Good, he cannot be trusted

If whatever God wills is (by definition) good, then “good” is arbitrary (as is often pointed out). But this is not merely a problem for ontological grounding. Christian apologists like Craig argue that it’s not arbitrary, because to do other than what is in fact (as we instinctively see it) good is against God’s nature…but so what? On the view that good is defined by God’s will, there’s no real reason to suppose that it cannot change tomorrow. Craig would probably raise a lot of arguments to the effect that God has promised not to, it’s not in his nature, and so on; but how does he know that? Under DCT, it’s not wrong for God to deceive Craig about what his nature is: if he wants to, it’s good by definition. It’s not wrong for him to change his mind about what’s good: if he wants to change his mind, that’s good by definition. In fact, it’s rather Nineteen Eighty-Four-ish: It is wrong to kill people. It has always been wrong to kill people and always will be. It is good to kill Amalekites…


Craig fails to notice the beam in his own eye

But there’s a deeper yet much simpler problem with Craig’s view, which is this: He says that what God wills is by definition good, and that God has the right to determine this because he created the universe, owns us all, and has the right to do with us as he pleases. But this is a naked assertion. Craig claims that DCT provides an objective view of morality, meaning presumably one with no arbitrary propositions accepted axiomatically, and yet ultimately even his own moral view is arbitrary and axiomatic, too. When Krauss says it’s bad to cause suffering, Craig asks Why?—fair enough, and I fault Krauss for failing to understand this question: I think Craig is right when he implies that Krauss is relying on what amounts to an arbitrary axiom.² But Craig’s own argument is no better, because when he says that God’s will defines what is good, even someone who agrees with him might well ask Why? Craig will say it’s because God created and therefore owns the universe and everyone in it: to this I would retort Why does creating the universe give him the right to do what he wants with it? Craig spends a good deal of time insisting that you cannot get from a factual to a normative statement—you can’t get from an is to an ought—and then he blithely goes and does that very thing in the very same breath.


¹ Fortunately, it most likely never actually happened.

² Philosophically arbitrary—of course, it’s not arbitrary in terms of our neural wiring.

2014-09-12 12:40 pm

The No True Theist fallacy

Whenever a religious fringe group rises up in arms, be it Al-Qaeda, ISIS, Christian murderers of abortion providers, or whatever, pundits scramble for the podium, each fighting to be first to proclaim that what those people do is not motivated by religion, that "real religion" is not like that. This is bizarre, and either dishonest or foolish.

Let's be clear: I don't like Islam, but there are about a billion and a half Muslims out there who aren't terrorists, the vast majority of whom would (I presume) be no more eager to decapitate people than I would. I am not suggesting that, for instance, ISIS aren't a fringe group. Of course they are. (And of course there are lots of non-Muslim Arabs, and the large majority of Muslims aren't Arabs to begin with.) Nor do I think that Islam is inherently more vicious than Christianity, though the latter has been somewhat defanged by the Enlightenment.

That said, it's very odd that these commentators always insist that any evil whatsoever cannot be motivated by religion, as "properly" understood. It's always other factors: political, historical, cultural. Of course, all that context is always significant, and sometimes religious divisions are secondary (IRA?), but claiming that it's about history and culture instead of religion is an implicit assertion that religion has no influence on culture and history. If someone says that people are never motivated to evil by religion, they're implying that people's beliefs do not influence their behaviour; or perhaps that religious beliefs aren't important enough to be acted upon.

Well, that's what they would be implying, at any rate, were they not busily committing logical fallacies to protect, pardon me, the sacred cow of religion. If someone does something nice and credits “do unto others” or “whatsoever you do unto the least of my brothers”, if Muslims give to charity and say it's because the Quran tells them to, everyone is happy to accept their stated motivation. But the moment they do something bad, it is widely denied that their motivation could possibly be what they say it is, even if they can cite verses in their support. “No religion condones the killing of innocents,” said Obama, apparently unfamiliar with Psalm 137:9, Hosea 13:16, and other pleasant tidbits.

I don't believe in any of this. I believe that when someone claims to act out of religious conviction, the possibility should be entertained that they may be telling the truth, whether the act be good or evil; moreover, that even if an interpretation is a minority view, that doesn't disqualify it from being religious. I believe that many people take their religion seriously and do act on their beliefs, sometimes to great good and sometimes to great evil. Let me repeat that: Religion in general, and certainly the big monotheistic ones, can motivate people just as easily to evil as to good. That, precisely, is the problem—not that people of any religion are somehow intrinsically evil, but that people mistake scriptures for moral compasses.

2012-08-13 02:00 pm

“Evolved brains would be fallible, ergo evolution is epistemically unsound”

Some theologians and apologists (notably, I gather, the fairly famous Alvin Plantinga) hold forth a curious epistemic argument purportedly in favour of their theism. The argument in one form (not due to Plantinga) goes like this: Evolution optimises organisms for survival and gene dispersal, not correct beliefs, which would be favoured only if they enhance the above.

That is, because there’s no telling whether unguided evolution would fashion our cognitive faculties to produce mostly true beliefs, atheists who believe the standard evolutionary story must reserve judgment about whether any of their beliefs produced by these faculties are true. This includes the belief in the evolutionary story. Believing in unguided evolution comes built in with its very own reason not to believe it.

Now, the most obvious problem with this argument is of course that evolutionary theory does give us a reason to suppose that we can arrive at true beliefs, because it is difficult to conceive of any process whereby a tendency toward mostly false beliefs would be beneficial for survival or gene dispersal. I'm sure some scenarios can be dreamed up where fortuitous misconceptions would cause an animal to behave in a manner just as good as, or even better than, correctness would, and certainly we know of (and science corrects for) some tendencies toward, for instance, false positive errors and other biases, but here we must imagine something both subtler and more pervasive, and in particular a mechanism that accepts sensible input from the exterior world and systematically transforms this input into beliefs that are erroneous and yet more advantageous than the simpler mechanism of apprehending reality…

Still, I don't think that's the argument's worst problem. After all, the assumption of some divine entity provides no more guarantee that your senses are accurate than a naturalistic view! On the contrary: Although it's not a logical proof, I think I have outlined a good reason to think that evolution is in fact likely to produce brains capable of apprehending reality, not perfectly but with at least some fidelity. Assume the existence of an all-powerful being, on the other hand, and all that goes out the window. What grounds have you to suppose, if such a being exists, that the beliefs it chooses to have your brain produce are correct? It is completely arbitrary! The apologist might conceivably argue that his God is a God of truth and so forth, but those are just more of the same arbitrary beliefs. On the assumption that an all-powerful being exists, which can manipulate your senses and beliefs as it sees fit, you are at the utter mercy of its intentions; and its intentions are unknowable, because it can make you believe whatever falsehood it wants, and every “evidence” you might have that your vision of this god is the right one is equally susceptible to (infallible) falsification.

Ultimately, both atheists and theists must assume some fidelity of their senses a priori, whether they wish to admit it or not. Although every epistemology needs its axioms, the naturalistic world view introduces no more than necessary, and people like Plantinga and his fellow admirers of the Argument from Arbitrariness (if you will) would do well to avoid casting stones in glass houses, for once you assume the existence of ultimate beings, everything is arbitrary.

2012-04-17 10:39 am

“It takes faith to be an atheist”

Tell a Christian that you are an atheist because you find the evidence for theism thoroughly unconvincing and the odds are pretty high that you will, at some point, be told that he doesn’t have enough faith to be an atheist, or that you need faith in the non-existence of gods just as much as he needs faith in the existence of his. At first blush, this sounds at once superficially reasonable, obviously false, and profoundly bizarre.

It sounds superficially reasonable, because the objection that my atheism is not founded on an absolute certainty and absolute proof is of course correct. It sounds obviously false because the word “faith” is typically used to describe a positive belief in something for which there is insufficient empirical evidence, and is not a word suited to describe skepticism, whether justified or unjustified. It sounds profoundly bizarre because many Christians use the word to describe a purported virtue of trusting in the existence and benevolence of their god in spite of the lack of such evidence (the substance of things hoped for, the evidence of things not seen).

Part of the problem is that the word “faith” is a vague one on which we may both equivocate and have genuine misunderstandings. I use it to describe belief that is not justified by rational evidence, because in any situation where there is evidence we have other words to describe it, but I recognise that anyone who uses the word in conversation with me may mean just that, or equally well something different: a religious belief that they perceive to be supported by evidence, a synonym for “confidence”, or something else altogether.

Then again, a disingenuous approach some debaters will use is to conflate them intentionally, a logical fallacy known as equivocation. You might say that I have “faith” that if I sit down my chair will bear me up, just as you have “faith” that your god exists—but they are clearly not the same kind of faith, since I have ample evidence that my chair will support me, and furthermore this evidence is available to anyone who wants to inspect it: You could (if you truly doubted it) have photos, videos, contemporary eyewitness testimony, or if you were truly dedicated you could come visit me and see for yourself. Moreover, the supportive quality of chairs is not contrary to anything in common experience; it’s not (as Sagan would say) an extraordinary claim. This approach is apparently used to justify the evidence-free kind of faith by implying that it is equivalent to obviously rational forms. It is not. My confidence in chairs is based on facts and observations that could be amply supported against someone skeptical of chairs; unless you can provide facts and observations in favour of your deity, it’s not the same thing at all—and if you can then let’s talk facts and evidence, not “faith”.


More promising is the notion that I need faith to be an atheist—faith not quite supported by evidence, that is—just as the theist needs faith to be a theist. Some theists, indeed, are known to dismissively quip that “I don’t have enough faith to be an atheist” (by implication of which faith is a bad thing, since more of it leads to us sinful atheists—but that is by the way). However, this also falls flat on closer inspection.

First of all, we all subscribe to most of the same basic premises or assumptions in dealing with the world, theists and atheists alike. We all operate on the assumption that the external world is real and that our senses provide us with systematic information thereof. Even a hypothetical, reductio-ad-absurdum biblical literalist has no choice: Without the empirical evidence of his eyes and ears, he could read no scripture and hear no sermons. So clearly, in terms of the basic appreciation of what exists, we start from the same position.

Entia non sunt multiplicanda praeter necessitatem, as Occam’s Razor slices, and I choose to stop there. I accept the truth of premises that cannot be denied without resort to solipsism, but thereafter I demand evidence before I accept anything as true. This post goes into more detail, but in brief, since it is always possible to invent an infinitude of ideas, explanations, and purported entities, my choices are always going to be refusing to accept any without evidence, attempting to accept all of them, or picking and choosing in an ad hoc fashion.

This all sounds rather abstract, so let’s consider this tweet from @repenTee:

@haggholm as I think about it ur conjectures are based on faith no evidence 2 prove that God doesn't exist somewhere in the universe.

(Pardon his spelling; it’s a tweet.)

The problem with this protestation is that while it is true that I have no direct evidence that no such thing as his God is floating about somewhere in the interstellar void, neither do I have any evidence that there aren’t two gods. Or three. Or ninety-six point four. Or, for that matter, a giant magic space-duck ’round whose mighty bill six supermassive black holes revolve. This shows the insufficiency of “there is no direct evidence against it” as an argument to accept any proposition: It opens the gates to all manner of silly things. I want to remain intellectually consistent, so I must treat all these disparate and sometimes contradictory claims (that there is exactly one god, that there are exactly two, three, four… they cannot all be true) with the same approach. I do, and so accept only those whose existence is supported by good evidence. Therefore I am an atheist.

(This is of course what Russell’s Teapot was created to illustrate, along with its more modern successors—the Invisible Pink Unicorn, Sagan’s invisible dragon, the Flying Spaghetti Monster, and so on.)

So as Bertrand Russell observed,

…[if] I were to go on to say that, since my assertion cannot be disproved, it is intolerable presumption on the part of human reason to doubt it, I should rightly be thought to be talking nonsense.


I believe that this sufficiently deals with equivocation, and dismisses the idea that the lack of positive disproof of a proposition (in spite of the lack of positive evidence for it) is sufficient grounds to believe in it. We’re left, then, with the notion that the atheist’s confidence that there are no gods is on par with the theist’s faith in his because both positions have evidentiary support. The same @repenTee provided this frank and illustrative example in a blog comment:

…The faith we've entered into is not without evidence. Much as biologists observe cellular structures so we have observed nature and from it conclude that these things have been created by God. As we have observed people, places and things we conclude that something greater than ourselves must exist. Who this God is from that point we may differ but the theist never concludes that God exists apart from evidence....

Unfortunately, the analogy with biologists falls rather flat when we consider that the biologist’s inference from observation is only the first stage of scientific investigation. In the canonical simplification of scientific inquiry, this is observation leading to hypothesis formation. A biologist might for example observe cells in agar, see some interesting things, and conclude that cells reproduce by fission…but it doesn’t end there. If a biologist submitted a paper to a journal with no more substance than “here’s what I saw and here’s what I conclude”, it would be rejected and might not even receive the grace of a note explaining why. Rather, the biologist must treat this as a starting point only and ask questions. If I am right, what does that imply? What else should I be able to see? Can I follow up on that, and do I see what I expect? More importantly, what if I am wrong? What should I expect to see if I am wrong, and can I check up on that?

Indeed, some very great scientific truths have been discovered thanks to ideas that were arrived at in very ad hoc fashion, but turned out to be true. August Kekulé famously arrived at the structure of the benzene molecule from a dream of the Ouroboros, a snake biting its own tail. Einstein developed a lot of ideas from Gedankenexperiments and his sense of scientific aesthetics. The ultimate source of an idea is not so very important, whether empirical observation or irrational impulse—you may observe nature and draw the wrong conclusions; you may hallucinate and by chance have a correct idea. The key is not where the idea comes from, but how we can tell if it’s correct or erroneous.

These are of course the principles of falsifiability and (implicitly) replicability, two of the great cornerstones of the scientific enterprise. We accept no one’s word that something is true just because it seemed reasonable from what they saw. We expect them to explain in quantitative detail what difference their idea makes, so that we can make predictive statements and check whether it’s correct. Note that this goes beyond merely looking for consistency. I can make up all kinds of crazy ideas that are consistent with facts. I can claim that the world is such as it is because the giant magic space-duck willed it to be so, and this is consistent with facts. But it’s not an idea to be taken seriously because I cannot say “If the space-duck exists then we should observe X; if it does not then we should observe Y.” Before I accept the truth of a proposition, the existence of any entity, it must be clearly meaningful to say that it is false—and of course that meaning must turn out to be counterfactual.

So let us return to the quote from above:

…The faith we've entered into is not without evidence. Much as biologists observe cellular structures so we have observed nature and from it conclude that these things have been created by God.

At this stage, what’s been described is hypothesis generation. There’s nothing wrong with generating hypotheses, and no wrong way to do it (only more or less productive ones), but hypotheses must not be mistaken for validated theories, for truth. How do you know that your idea of divine creation is correct? What predictions have you (or any theist) ever made that would detect divine agency—what evidence should be sought to verify that your god created something rather than just natural processes? If you have not looked for it, then it’s not comparable to what a proper biologist does at all; it’s the brainstorming phase, not the publishable work that actually gets a scientist respect and tenure.

This is also the big problem with a deist god. Certainly it violates no evidence, but neither does it leave any evidence or make any predictions. To say that there is a god, but it leaves no traces of itself for us to find, only sounds less crazy than to say the same of a magic space-duck because we are culturally conditioned to take gods more seriously.

The objection to deism is also applicable to certain views of theism—that is, those that fall into the trap of the God of the Gaps. Over the centuries, some defenders of religious faith have insisted that what we cannot scientifically explain must be the work of their god—the orbits of the planets, say, or the origin of life. As Kepler, Newton, et al. explained orbital mechanics, these defenders of faith had to admit that the planets weren’t pushed along by their god—but “ah”, they’d say, “gravitation itself is surely the power of God”. Along comes Einstein and explains gravitation as geometry, the consequence of deformations in spacetime, and gravitation turns out not to be an intangible force after all. “Ah!”, exclaim the defenders (or their intellectual descendants), “but then spacetime must be due to God.” And so on—with every new discovery, their god is redefined so as not to conflict with facts. But this god can never generate a meaningfully falsifiable prediction, because every falsification is inevitably explained away with a new redefinition.

Indeed, earlier versions of these beggar-gods, deities who would hide in any nook or cranny that science had yet to illuminate, did generate falsifiable hypotheses, such as “the planets could not remain in stable orbits but for the mystical power of God”—which turned out to be false, neatly disproving them.


The only gods that remain to be dealt with are the ones with more meat on their bones—ones who generate falsifiable claims: Gods such that their followers ought to be able to come up and tell me: “These are the verifiable (or falsifiable) differences between two models of the world: One such as it is or would be with my god in it; one such as it is or would be without him.” That is a god that needs to be evaluated on its individual merits, the evidence for and against it weighed—especially that against it (as attempted falsification yields better evidence than mere consistency-with-established-facts).

I’d welcome such falsifiable evidence.

2012-03-21 11:19 am

Nucleons of empiricism, phlogistons of faith

I won’t pretend to offer a full-fledged epistemology, but something I often ponder and would like to set in words for my own clarification is my opinion on what knowledge can be based on. As someone who occasionally gets into arguments over religion or philosophy, I consider it important to know what fundamental basis I am really attempting to argue from.


First, let us recognise that a superior epistemology should make as few assumptions as possible. If we are to reason, we must use logic, but logic is but a way of taking facts (premises) and figuring out what other facts (conclusions) are implied by them. It can’t introduce new knowledge per se, and while it can point out problematic premises by showing inconsistencies, it cannot supply correct ones. Thus on some level we have to simply assume some premises—as few as possible (the more we have, the more we risk error) and as safe and inarguable as possible.

To me, the most fundamental source of knowledge is and must be physical reality. This may sound uncontroversial or at least unsurprising coming from me, but let me clarify: I believe that physical reality must hold epistemological primacy even over logic (and its broader-scope cousin, mathematics). Logic is important and a critical tool for reason, but it follows from reality, not the other way around. (You might recognise this as the opposite of what the ancient Greek philosophers generally held.)

Some have held that perception of physical reality can’t be accepted as fundamental, because our senses are flawed. Certainly no one can prove to every pedant’s and solipsist’s satisfaction that we do not, for example, live in a computer simulation, or in Plato’s cave; that reality, along with our perception of its consistency, isn’t in fact an illusion. All these notions, though, seem to share in common the attribute that they are completely unproductive. If my mind is randomly recomposed moment by moment, with memories and perception of continuity mere illusions, then ipso facto I cannot effectively reason about anything.

If you tell me that I should trust in your words, or the words of some sacred writ, because my eyes and ears deceive me, I will respond that if my eyes and ears deceive me, I surely cannot trust words either written or spoken. If you tell me that I should believe in something or other because my ability to reason is limited and fallible, then why should I be convinced? If I find that argument convincing, I am ipso facto convinced by means of faulty reasoning.

No, surely to say anything meaningful about anything at all, we must accept that there is an external reality and that, for all their flaws, our senses and perceptions at least provide some kind of systematic picture thereof. It may not always be correct—in fact we know of lots of ways in which our perceptions often fail us—but if it is at least basically systematic (within the margins, as it were, of measurement error), then this gives us a chance to address the truth, aided by statistics and probability, augmenting our memories with records (so long as we can read them), our senses with instrumented perception (so long as we can read the dials with reasonable fidelity), our fallible reasoning with formal logic.

I believe that everyone (at any rate, anyone who is not insane) essentially believes this (in part because I believe that people who argue that reality is an illusion and our memories may well be recreated moment by moment are really just playing word-games, actually living their lives quite in accordance with conventional notions of continuity and cause-and-effect). Even people who relegate empiricism to a distinctly secondary position after, say, faith in some religious dogma still accept this, whether they admit it or not. Without accepting the testimony of their senses, they wouldn’t have any cause to know that any scripture exists or what it says.


Very well, so we accept a sort of basic empiricism: The world exists, and our senses report on it, if not perfectly then at least systematically so that we can by dint of intellectual effort untangle systematic errors and gain a clearer picture. What else do we need? Until recently I should have said logic—an argument needs premises and a valid formulation; empiricism gives us premises; logic provides the formulation; ergo we need both.

However, as my second point, I believe that logic is secondary to physical reality and need not be taken as a fundamental.

Perhaps my biggest light-bulb moment in formulating this thought was rendering explicit the fairly obvious observation that the logical syllogism is really no more than a mathematical restatement of the physical principle of cause and effect.

logic                  | formal logic | empiricism
if A, then B           | A → B        | A is observed always to cause B
A [is true]            | A            | A happened
therefore B [is true]  | B            | therefore B happened

In other words, I conclude that logic is simply a description of cause and effect, just as F = G·m₁·m₂/r² is a description of (Newtonian) gravity, rather than itself (qua formula or idea) anything fundamental. Reality would go on as usual even if nothing within it had any concept of logic. However, if reality did not proceed according to the laws of cause and effect, there could be no logic: If we existed, we should have nothing to base it upon, nor would it be applicable to anything. It could at best be a self-consistent but meaningless system of symbol manipulation.


Third and finally, I believe that we need nothing else at the very bottom of our epistemology. There is reality. It is necessary (because without observation of reality there can be no knowledge); it is also sufficient. Observing reality naturally generates the laws of logic, which, however complicated they get, ultimately flow from the basic syllogism, which is itself a statement of the empirically observed principles of cause and effect.

Of course any meaningful argument about anything whatsoever, unless it be epistemology itself, is naturally going to invoke much higher-level principles. The rules of logic are the atoms of arguments, syllogisms the molecules; only when we care about the subatomic do we need to bother to point out that the logic-atoms are really made up of empirical nucleons. However, I am aware of no good reason why I should take seriously any argument that does not render down into this empirical nucleon soup if sufficiently picked apart.


I don’t pretend to be able to reduce most arguments to their nuclear details, but this does not mean that I abandon the idea. I don’t pretend to be able to explain every minute detail of a burning match down to the level of atomic interactions and changes in valence electron shells, either—this does not reduce my confidence that the standard model of physics is in principle perfectly capable of explaining that burning match without having to involve phlogistons. If someone attempted to convince me of the reality of phlogistons, my ignorance of details would not be sufficient grounds for me to accept it: They would have to directly demonstrate the reality of phlogistons, or that my physical theory is in principle insufficient to explain fire.

Similarly, if you introduce any other principle into an argument—faith, for instance, or curious notions such as epistemological relativism—I shall regard any such principle as a phlogiston, whose existence and relevance you shall have to substantiate before I take any part of your argument seriously. Unless you can do that, explain yourself in terms of observable reality or expect to be dismissed.


My earlier post, Science and epistemology, contained the germs of this idea. In How I try to think, and how I try not to I muse on how to apply the idea, and common pitfalls to avoid.

2012-03-12 10:47 am

Impact of parenting; and evidence versus my intuition

To change one’s mind when presented with sufficient evidence is a hallmark of a rational person. This is the ideal of the scientific method, and the failure to pursue it is the bane of human rationality. We are burdened with various cognitive biases and shortcomings that make all of us humans naturally bad at it: We tend to seek out observations that confirm our beliefs and credit them when we find them; we tend to be more critical and skeptical of observations that contradict what we believe to be true. I often speak at length about this, criticising others when they persist in their beliefs in the face of evidence.

So what about me, then? When do I change my mind?

I must regretfully admit that I can’t think of a great many examples. Probably no small part of this is due to the fact that no one, however much they may appreciate the importance of evidence and perniciousness of cognitive biases, is actually immune to those biases. I do my very best to re-examine my beliefs when rationally challenged, but I suspect that every one of us carries a great many beliefs obtained for irrational reasons that, correct or incorrect, we just never come to critically re-examine. As a child you were taught a thousand thousand things, and as a child you had no choice but to absorb them, no framework for critical evaluation. Probably you will not re-examine all of those beliefs in your entire lifetime.

I’d like to think that another significant part of this is that I try not to form beliefs without a rational basis. I like to think that I rarely say anything that is flat-out wrong, because I try to avoid making claims that I’m not confident about. Maybe there’s something to this—I hope so—but no one is infallible; I am inevitably wrong about some things, ergo there must be beliefs I ought to change, but have so far failed to.

Maybe the most obvious example of an area where I have changed my mind is religion, but it seems kind of trivial. It was only as a child that I was capable of blind faith, the conviction of things not seen; I grew up and grew out of it when I realised that there just wasn’t anything supporting it, and I was firmly atheist long before my voice changed. For a long time I held the curious faitheist position that although it’s mistaken, it’s still somehow noble and worthy of respect to have committed faith; I have changed my position here too, recognising that holding irrational beliefs is inherently bad (and in fact intellectually a much worse crime than happening to reach erroneous conclusions). But all that is rather trivial; the total dearth of supporting observations makes it childishly easy to discard.


A much more recent, complicated, and difficult belief was upset some time last year or the year before, when I first started reading and learning how little parents matter to the personalities of their children. Steven Pinker summarises it in this video; the gist of it is that for most behavioural metrics,

  • up to 50% of the variation in the trait is genetic;
  • 0%–10% of the variation is due to parenting/upbringing;
  • the rest is due to culture, peer groups, &c.

This is illustrated by facts such as

  • adoptive siblings are hardly more similar than people picked at random;
  • monozygous twins reared apart by different parents tend to have very similar personalities, even if they are raised in very different environments and never meet.

I found this surprising. Indeed, if a fact could be offensive, this would be pretty close to it. Parenting doesn’t matter? Intuitively this makes roughly no sense at all to me. My parents matter intensely to me. Surely they shaped me? I can identify many, many traits, beliefs, and tendencies that correlate incredibly well with my parents. For better or worse, I think of myself as very much my father’s son, and I share many of his strengths and weaknesses. I have the same intellectual bent that he had, and many of the same interests.

And I value my parents. My father was a very flawed man, but he was always good to me, I got along well with him, and I loved him in spite of all his many flaws. My mother is wonderful, and I often consider myself very lucky in that she is so accepting, so ready to have a grown-up parent/child relationship with me, even when we deeply disagree on things. The notion that their influence on me was much less than I had thought seems…disparaging.

But the fact of the matter is that surprising and counterintuitive though it may be to me, that doesn’t alter the truth one whit, and I know damn well that intuition does not trump evidence. There are various studies on the subject, and I gather many are summarised in The Nurture Assumption by Judith Rich Harris, which I really ought to read at some point… If the evidence contradicts my intuition, then I should discard my intuition, not the evidence.

There are also perfectly good explanations for the observed correlations under the working theory above. Of course I resemble my parents in many respects: I share 50% of my genetic material with each of them, and just as I look quite like my father did when he was young, demonstrating that he contributed to my visible phenotype, so he surely contributed to my behavioural phenotype, as well. And while I wasn’t brought up in quite the same environment as my parents were, still there were surely similarities.

Additionally, I can think of hardly anything more conducive to confirmation bias than an informal analysis of a child’s resemblance to its parents. Of course I can think of commonalities: After all I spent eighteen years living in the same house as my parents, and had extremely ample time to learn just what traits and behaviours I shared with them.

Finally, I think that the deep personality traits that psychologists measure—agreeability, neuroticism, and so on—are probably less tangible, less open to obvious observations, than more superficial behaviours. It’s surely true that I read Biggles books as a child because my father had done so when he was a boy, had saved the books, read them aloud to me for a while. But this is a very superficial behaviour compared with whatever personality traits make me someone who enjoys shutting himself in with a book.

Of course, all of this is just reinterpreting old data in a new framework: Take the observations I made under the paradigm of “I am this way because parenting so made me”, and reinterpret them under the paradigm of “Parenting doesn’t matter nearly so much; genes and social environment are more important”. This is a fine thing to do, but were I unable to account for these data, still I should have to bow to the evidence: My personal, anecdotal observations do not trump the data.


I should add that I am not convinced that no kind of parenting can have fundamental, important effects. I vaguely seem to recall reading, and at any rate I have seen nothing to contradict this belief: That a truly poor environment, such as abusive parents, can have deep and terrible effects on a child. I do not base this on any real data, so I will not vouch for its truth at all, but until I read otherwise this is my working hypothesis: Terrible parents can psychologically damage their children and have disproportionate influence, for the worse. Parents who aren’t terrible, though, have surprisingly small effects on personality, and while a good parent is a very different creature from a terrible one, the outcomes vary surprisingly (disappointingly!) little between mediocre, good, and great parents.

Here, though, more data are needed.

(You may protest that people who are particularly good and responsible tend to have children who grow up to be particularly good and responsible. To this I say: Recall that these are people who may be genetically predisposed to be particularly good and responsible, and with up to 50% heritability in most personality traits, it’s no wonder if that is passed down.)

2011-10-19 11:51 am

Continuum of violence, levels of force

You never want to go to the ground in a street fight!

You may have heard that sort of thing before, if you’ve ever heard anyone discussing or dismissing the values of various martial arts for the purpose of self defence:

  1. BJJ¹ is great for competition, but it relies too much on the rules. In a real fight there are no rules.
  2. BJJ may work one on one, but grappling won’t work against multiple opponents.
  3. Going to the ground is the last thing you want to do in a real fight. It’s a sure way to get stomped by the assailant’s friends.

There are various quick dismissals of some of these (and similar points). For example, dirty fighting isn’t the magic bullet some imagine; ball grabbing isn’t so easy and trying to punch someone in the groin, if they know a bit of grappling, is not very effective. And the history of MMA and the UFC empirically and convincingly taught us that while a complete martial artist must be competent in all three phases of unarmed combat—free-moving standup, clinch, and ground—still the reality is that grapplers almost always seem to defeat strikers if the latter have not trained in wrestling/grappling enough to learn some takedown defence. In modern MMA, of course, everybody is more rounded and a grappler with no striking won’t last long, but this is precisely because even the strikers who prefer to keep the fight standing have learned enough grappling to remain standing or defend and disentangle themselves and get back to their feet.

As for multiple attackers, well, if you’re outnumbered you’ll probably lose no matter what you do. I agree that keep moving, don’t get trapped, and don’t go to the ground is good advice, but I think clinching and quickly throwing is pretty effective too, and in a real multiple-opponent fight it seems absurdly optimistic to assume that things will always go your way and you’ll never get grabbed and taken down against your will. Grapplers have the advantage of knowing how to get to a top position, disentangle themselves, and stand back up. Still, one-on-several fighting is a losing proposition regardless of what you do. And in one-on-one situations—the situations you have any realistic chance of winning—grappling works just fine. (In that classic bugbear of a violent rapist assaulting a lone woman—actually a small percentage of rapes—BJJ², with its highly developed guard and techniques for variously choking or breaking an opponent who is trapped between your legs, seems so well-suited that it might be custom-made.)

But all that is by the way. The truly puzzling thing about all this is the assumption—sometimes tacit and sometimes explicit—that “real world self defence” comes in precisely one variety: Life-or-death combat against an enemy with mortal intent and against whom lethal force is justified. (Sometimes this is codified in that reliable old false dichotomy: It’s better to be judged by twelve than carried by six.) Furthermore, every multiple opponent fight is assumed to consist of the hypothetical protagonist being outnumbered. (Perhaps the exponents of these street-lethal martial systems just don’t tend to be out among friends. One might facetiously ask why.)

In reality, I’ve never been in a really serious fight, and probably you haven’t either. The last time I was in a “real” fight it was on the schoolground. Perhaps you’ve been in a barroom brawl or some retrospectively stupid ego fight. These are not fights where lethal force is justified. It may be trivially true that it’s better to be judged by twelve than carried by six, but all things considered it might be better to swallow your ego and back down, or even take a few bumps and bruises and four stitches in the ER, than to spend two years in prison and acquire a criminal record.

The reality is that different violent situations require different levels of force and different kinds of violence. Here, I would argue that grappling arts like BJJ are inherently superior to striking arts. Boxing is an excellent martial art and combat sport, and a very effective way of defending yourself should it come to that (especially if you have at least rudimentary grappling skills in case someone eats a punch and bulls his way into a clinch), but it simply does not offer a continuum of force that allows you to gently control a situation. It’s perfectly legitimate when used against the hypothetical psychotic attacker, but wildly inappropriate when your usually-pleasant friend gets rowdy and drunk at a party. Punching someone is fair if they are trying to punch you first, but if somebody is pushing and shoving you and gearing up for a fight, perhaps it’s wiser to just trip them up, take them down in a controlled fashion, and hold them in a pin until they calm down.

Furthermore, it’s simply not true that every conceivable “street fight” involves you being outnumbered. If some truly heinous asshole assaults you outside a club, would you rather break his teeth and gouge his eyes out with some (hypothetically effective) Krav Maga, or place him in a loose guillotine and wait until the bouncers (or the cops) come and drag him away? Both approaches protect you from violence; only one of them protects you from legal consequences of using excessive force.

And of course it’s not like BJJ lacks responses to situations where greater levels of force are necessary. If my life were truly on the line I wouldn’t try to go for a gentle takedown and a pin; I’d go for the hardest takedown I could muster—maybe a hard double-leg, or perhaps a throw inherited from judo such as harai goshi or uchi mata; and slamming someone down on pavement is certainly no less effective than punching them in the face. (It’s true that BJJ players tend to be much weaker on takedowns than judoka or wrestlers, but we still have more training than the average untrained schmoe. If the schmoe in question is not in fact unskilled and possesses enough grappling skills to block takedowns, well, then you’d better have some grappling know-how to counter his!) We learn control positions and pins, but we also learn joint locks that can put limbs out of commission, and best of all, chokes and strangles that can rapidly render an assailant unconscious more reliably than any other technique. (It would take another minute or two of strangling after the point of unconsciousness before death sets in, so it’s unlikely to happen accidentally. Compared to concussion-inducing strikes, or joint locks, chokes are relatively low-risk methods of putting people out of commission.)

I’m not trying to paint BJJ as a be-all, end-all system of self defence here. For my more complete thoughts on that subject see this post, but in brief I think that unarmed combat is a final and rather poor line of self defence, after avoidance, negotiation, escape, and armed defence, in that approximate order. I also don’t wish to elevate BJJ over other grappling arts such as judo, wrestling, or SAMBO, which all have different strengths and weaknesses but are all fantastic. Nor do I mean to devalue striking: A complete martial artist should be able to handle himself both standing (freely or in the clinch) and on the ground. MMA, not grappling, is where the martial arts reach their peak in applicability to unarmed one-on-one combat (though rulesets such as Daido Juku/Kudo, sanda/sanshou, combat SAMBO, and similar are also excellent). What I do think is that grappling is an essential part, and that if you’re self defence oriented and can choose only one part (which would be unfortunate), grappling is in fact more important than striking. I also think that a lot of the criticisms are weak and unfounded and deserve to be deflated.

In summary, whenever you hear somebody say that you never want to go to the ground in a real fight, don’t just nod in instinctive agreement with common wisdom, but instead stop and recognise that

  1. …in the highly unfortunate circumstances where you are compelled to use force in self defence, context matters;
  2. …whenever milder levels of force are called for—restraint rather than incapacitation—grappling offers far more options than any amount of striking;
  3. …grapplers learn to fight on the ground and off their backs, but are not limited or required to, as takedowns are vital parts of the game and e.g. BJJ has a strongly developed strategy for getting to the top and a dominant position;
  4. …many of the horror scenarios designed to point out the folly of grappling—say, multiple attackers or weapons—are scenarios you’re unlikely to win anyway, and it may be more sensible to consider the value of different approaches for scenarios where any approach is workable.

As an example, here’s cell phone footage of a drunk man accosting renowned grappler Ryan Hall in a restaurant and being forcibly subdued without a single punch thrown and without causing injury.


¹ I use BJJ as an example because it’s particularly prone to this kind of criticism and because it’s the art I myself practice and am most familiar with, but it applies pretty much equally to judo, sport SAMBO, submission grappling, Greco-Roman, freestyle, or collegiate wrestling, and so on.

² This is an exception to ¹.

haggholm: (Default)
2010-11-19 05:58 pm

“IOC to issue gender case guidelines”

Via a CBC article here, the International Olympic Committee is promising to deliver guidelines on how to deal with athletes with ambiguous sexual characteristics. This was brought into the limelight with the controversy over the South African runner, Caster Semenya, in 2009.

This’ll get long; if you are impatient, or are uninterested in or offended by my own opinions, jump down to the quote below on what the IOC had to say.

(Expandable sections in the original post: My thoughts; What the IOC said; Footnotes.)
haggholm: (Default)
2010-11-08 11:39 am

Pointless Firesheep countermeasure, BlackSheep

In a spectacular display of missing the point, a group of “security researchers” released a Firefox plugin called BlackSheep, designed to combat Firesheep by detecting if it’s in use on a network and, if so, warning the user.

To explain why this is at best pointless and at worst harmful, let’s recapitulate what Firesheep does: By listening to unencrypted traffic on a network (e.g. an unsecured wireless network), it steals authentication cookies and makes it trivial to hijack sessions on social networks like Facebook.

Let’s use an analogy. Suppose some auto makers, Fnord and eHonda and so on, were to release a bunch of remote controls that could be used to unlock their cars and start the engines. Suppose, furthermore, that these remote controls were very poorly designed: Anyone who can listen to the remote control signals can copy them and use them to steal your car. This takes a bit of technical know-how, but it’s not exactly hard, and it means that anyone with a bit of know-how and a bit of malice can hang around the parking lot, wait for you to use your remote, and then steal your car while you’re in the shop.

Now suppose a bunch of guys come along and say Hey, that’s terrible, we need to show people how dangerous this situation is, and they start giving away a device for free that allows anyone to listen to remotes and steal cars. This device is to Fnord and eHonda remotes exactly what Firesheep is to Facebook, Twitter, and so forth. What’s important to realise is that the device is not the problem. It does allow the average schmoe, or incompetent prankster, to steal your car (or use your Facebook account), but the very important point is that the car thieves already knew how to do this. Firesheep didn’t create the problem; by making it trivial for anyone to exploit, its authors generated a lot of press for it.

What the Firesheep guys wanted to accomplish was for Facebook and Twitter and so on to stand up and, in essence, say Whoops, we clearly need to make better remote controls for our cars. (It’s actually much easier for them than for our imaginary auto manufacturers, though.) True, Firesheep does expose users to pranksters who would not otherwise have known how to do this, but the flaw was already trivial to exploit by savvy attackers, which means that people who maliciously wanted to use your account to spam and so forth could already do so.

Now along come the BlackSheep guys and say, Hey, that’s terrible, the Firesheep guys are giving away a remote that lets people steal other people’s cars!, and create a detection device to stop this horrible abuse. But of course that doesn’t address the real point at all, because the real point has nothing to do with using Firesheep maliciously, but to illustrate how easy it was to attack a flawed system.

This is stupid for several reasons:

  1. If BlackSheep gets press, it might create an impression that the problem is solved. It isn’t, of course, firstly because Firesheep wasn’t the problem to begin with, and secondly because BlackSheep only runs in, and protects, Firefox.

  2. People running badly protected websites like Facebook could use BlackSheep as an excuse not to solve the real problem, by pretending that Firesheep was the problem and that problem has been solved.

  3. Even as a stop-gap measure, BlackSheep is a bad solution. The right solution is for Facebook and Twitter and so on to force secure connections; a minimal sketch of what that looks like on the server side follows after this list. Meanwhile, as a stop-gap, the consumer can install plugins like the EFF’s HTTPS Everywhere, which force secure connections even to sites that don’t use them automatically. This is a superior solution: BlackSheep tells you when someone’s eavesdropping on you; HTTPS Everywhere prevents people from eavesdropping in the first place.

    Let me restate this, because I think it’s important: BlackSheep is meaningful only to people with sufficient awareness of the problem to install software to combat it. To such people, it’s an inferior solution. The correct solution is not to ask consumers to do anything, but for service providers (Facebook, Twitter, …) to fix the problem on their end; but if a consumer does anything, BlackSheep shouldn’t be it.
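By way of illustration, here is a minimal sketch (using nothing but Python’s standard library; the hostname, port, and cookie value are made up for the example, not anything Facebook or Twitter actually runs) of what “fixing it on their end” amounts to: serve the site over HTTPS, mark the session cookie Secure and HttpOnly so it is never sent in the clear, and send a Strict-Transport-Security header so browsers refuse to fall back to plain HTTP.

```python
# Minimal sketch of the server-side fix, standard library only.
# The point: a session cookie marked "Secure" is never transmitted over plain HTTP,
# so a Firesheep-style eavesdropper on an open wireless network has nothing to steal.
from http.server import BaseHTTPRequestHandler, HTTPServer

class SecureSessionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Tell browsers to use HTTPS for all future requests to this host.
        self.send_header("Strict-Transport-Security", "max-age=31536000; includeSubDomains")
        # "Secure": only ever sent over HTTPS; "HttpOnly": invisible to page scripts.
        self.send_header("Set-Cookie", "session=example-session-id; Secure; HttpOnly")
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"Logged in over HTTPS; the session cookie never travels in the clear.\n")

if __name__ == "__main__":
    # In real life this handler would sit behind TLS (e.g. a reverse proxy terminating HTTPS).
    HTTPServer(("localhost", 8443), SecureSessionHandler).serve_forever()
```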

As I write this post, I hope that BlackSheep will get no serious press beyond a mention on Slashdot. It deserves to be forgotten and ignored. In case it isn’t ignored, though, there need to be mentions in the blogosphere of how misguided it seems.

Let’s all hope that Facebook, Twitter, and others get their act together; meanwhile install HTTPS Everywhere.

haggholm: (Default)
2010-09-10 12:01 pm

Building mosques and burning Qurans

I don’t really understand the furor over these recent events: The resistance to a mosque near Ground Zero, the rage at the Florida priest who wants to burn Qurans.

Of course it’s offensive to burn Qurans. It’s offensive to me because book-burnings are generally violent and anti-intellectual protests and seem to follow in the footsteps of some very nasty movements that we’d do well not to emulate. It’s offensive to Muslims because—well, I don’t think I even need to finish that sentence. Yes, it’s offensive. That’s the point—like it or dislike it.

But in a society with free speech, we never have the right not to be offended. This should be starkly obvious to everyone paying the least bit of attention to the current debate, because both ‘sides’—if we grossly oversimplify this into Christians vs. Muslims for the sole purpose of making this paragraph simpler—are obviously offending each other. The Christians down in Florida are offending the Muslims by threatening to burn Qurans. The Muslims who want to build a mosque next door to Ground Zero¹ are offending an awful lot of people, too. Is it not slightly curious that when these [particular] Muslims are offending people, the promoters of tolerance tell us that we must be tolerant and therefore let them go ahead; whereas when other people are offending the Muslims, the promoters of tolerance tell us that we must be tolerant and therefore shut the hell up? Doesn’t this seem a bit one-sided?

Not only do we have no right not to be offended; it is not possible to get through this, or most any heated discussion, without someone getting offended. Someone will get offended however any of these debates turn out. Accept it. Get over it. Move on.

Now, personally (as if this mattered to anyone else!), I would be happier if nobody burned any books, even blatantly offensive ones like Bibles and Qurans and the writings of Martin Luther. I would also be happier if nobody built a mosque near Ground Zero. Well—I would be happier if nobody built any mosques at all, but to place it there reeks of either a desire to offend or of blatantly poor taste. I’m not suggesting that these people² don’t have a right to build their mosque wherever they own land with the proper zoning; I’m aware of no reason to think that anyone has any right to prevent them from building it. Still, it seems in poor taste at best: In spite of the fact that the overwhelmingly vast majority of Muslims are not in fact terrorists, it remains true that Ground Zero was the site of a horrific tragedy promoted and powered by religious fundamentalism, and a mosque is a place to honour and celebrate that same religion. I would feel the same way about a cathedral at Béziers, a Lutheran church at Auschwitz, a monument to America in My Lai. If you belong to a group that was associated with something horrible somewhere, the proper thing to do is probably to distance yourself from what they did and avoid glorifying your movement on the site of the atrocity.


Let me digress for a moment. I’m not suggesting that all Muslims are aligned with terrorist action. Not only are most Muslims in no way involved in atrocity; there are (unsurprisingly) overtly Muslim organisations that openly, honestly, and vocally campaign against the various atrocities often committed in the name of their religion. A brief search will find, for instance, the Free Muslims Coalition, who [promote] a modern secular interpretation of Islam which is peace-loving, democracy-loving and compatible with other faiths and beliefs. But the lamentable fact is that even they write that

The Free Muslims Coalition is a nonprofit organization made up of American Muslims and Arabs of all backgrounds who feel that religious violence and terrorism have not been fully rejected by the Muslim community in the post 9-11 era.

The Free Muslims was created to eliminate broad base support for Islamic extremism and terrorism and to strengthen secular democratic institutions in the Middle East and the Muslim World by supporting Islamic reformation efforts.

The Free Muslims promotes a modern secular interpretation of Islam which is peace-loving, democracy-loving and compatible with other faiths and beliefs. The Free Muslims' efforts are unique; it is the only mainstream American-Muslim organization willing to attack extremism and terrorism unambiguously. Unfortunately most other Muslim leaders believe that in terrorist organizations, the end justifies the means.

This is laudable—and tragic in equal measure. Laudable that they stand up to do this. Lamentable that they feel that they are unique; lamentable that there is broad base support for extremism and terrorism (laudable though their efforts to combat it may be); lamentable that they feel that they are the only mainstream American-Muslim organisation willing to unambiguously oppose it. Hear what else they have to say:

As written recently by Khaled Kishtainy, columnist at Al-Sharq Al-Awsat Newspaper, "I place on the Islamic intellectuals and leaders of Islamic organizations part of the responsibility for [this phenomenon] of Islamic terrorism, as nearly all of them advocate violence, and repress anyone who casts doubts upon this. Naturally, every so often they have written about the love and peace of Islam – but they did so, at best, for purposes of propaganda and defense of Islam. Their basic position is that this religion was established by the sword, acts by the sword, and will triumph by the sword, and that any doubt regarding this constitutes a conspiracy against the Muslims."

The Free Muslims finds this sympathetic support for terrorists by Muslim leaders and intellectuals to be a dangerous trend and the Free Muslims will challenge these beliefs and target the sympathetic support given to terrorists by Muslims.

Would that the Free Muslims were the majority voice of Islam. Alas, this does not seem to be the case. There are imams condoning terrorist actions. There are imams and ayatollahs issuing fatwahs calling for the death of people like Salman Rushdie and Ayaan Hirsi Ali. There are uproars and uprisings at things like the Danish cartoons. Sure, most Muslims do not participate. But where are the voices decrying and condemning it? Why do I always hear Not all Muslims are like that; you can’t judge them all by those atrocities, rather than Not all Muslims are like that; you can’t judge us all by those atrocities—because we condemn them? Where are the counter-fatwahs and apologies to Rushdie and Hirsi Ali; where are the apologies for Theo Van Gogh and others like him; where are the imams standing up and saying Yes, I found those Danish cartoons offensive, but nowhere near as offensive as the reality that people are willing to kill and destroy merely because they are being teased?


Given that the majority voice of Islam is not the Free Muslims—given that the loud and clear speakers are the issuers of murderous fatwahs and not those who oppose them—I hardly find it surprising that people take offence to the erection of a mosque near Ground Zero. They have no right to prevent it, but they have a right to be upset—and they have a right to wonder (I certainly wonder) what is the motivation of the people who chose to erect it there. Who are the builders? Are they Free Muslims who wish to have a holy site to say, Look what happens when you take it too far, when you value holy writ over human life—these are our people, and we are here to apologise on behalf of Islam, to distance ourselves from it, to make penance? If so, good on them and build on, please. If not, then who, and why? —But again, people have a right to be upset but not to stop it. I think this is clear. I think this is an obvious consequence of a free society.

Is it not, then, obvious that the same must apply both ways? The Florida priest who wants to burn Qurans is offensive—in fact, deliberately offensive where I don’t know what the motives at Ground Zero may be. People have a right to be upset. But just as the Muslims behind the Ground Zero mosque have a right to build it, even though it offends people, so the Florida Christians have a right to burn Qurans, even though it offends people. Both actions are offensive to some. Both actions are offensive to me. Both actions should be subject to criticism—but, too, the right to commit both actions should be defended.

What is the nature of the protest against the Quran burning? It will offend people! Why, yes, but so what? It will exacerbate the crisis in Afghanistan. It will incite terrorist action. Indeed, I’m sure it will. But is that any reason not to do it? I thought that it was important to a free, constitutional democracy to stand up for the right of its citizens to be free, to express themselves freely. I thought the United States of America prided itself on doing this. This is why I think that the constitution of the United States is a wonderful, beautiful thing, and though I am not American and have never lived (and will likely never live) in the United States, I admire it greatly and applaud the foundation of a country upon its principles and amendments.

So the people who say You should not do this because it will incite the enemy have it backwards. The purpose of the military arm of the United States of America is surely to protect the safety and the human and constitutional rights of its citizens, to ensure that every American is free to exercise his right to free speech. You should not avoid offending terrorists (Muslim or otherwise) to protect your soldiers; your soldiers are there to protect you from terrorists who seek to prevent you from speaking your mind.

Is the plan of the Florida priest to burn Qurans offensive? Why, yes. It is both literally and figuratively incendiary. But what’s really offensive is that some people might treat this action—the burning of a pile of papers—as sufficient justification to burn buildings and murder people. If you wish to condemn only one of these, then please condemn the latter. If you wish to condemn them both (I encourage it), then please condemn the latter a thousand times harder.


Building a mosque at Ground Zero might make terrorists happy. They might regard it as an ultimate victory, the erection of a monument to their faith at the site of their victory. But they are wrong. Allowing anyone, even if they turned out to be terrorist sympathisers, to build whatever the hell they want at Ground Zero is a victory of the values of liberty. Let everyone who walks by that mosque cast dirty looks; let not one of them cast a stone or grenade or wrecking ball. (If you vandalise it, you aren’t fighting terrorists, you’re becoming terrorists.)

You know what’s really letting the terrorists win? Saying that someone shouldn’t be allowed to offend them. The people who say that the Florida priest should not burn Qurans because it will incense the terrorists are allowing those terrorists’ atrocities to constrain their actions and freedom of speech. That’s what terrorism aims to do. That is letting the terrorists win.


¹ I may refer to these people as these people or these Muslims. Please note that I am doing so in a context-aware fashion. I’m not saying this because I lump all Muslims in with them, or because I think that those people in a voice laden with contempt is an appropriate tone, but because I am talking about a particular group of people. Feel free to be offended by my post, but please try not to misconstrue it.

² See ¹.


Errata: The facility is not at Ground Zero, but near it. I have a few instances of each in the text above, and at was a simple error on my part.

See comments: The applicability of the word mosque is debatable. Read the comments and decide for yourself.

haggholm: (Default)
2010-09-07 11:24 am

Science, models, and reality

To some, science is all about models.

Exactly what is the purpose of science? It depends on whom you ask. Some might say that it aims to find the ultimate reality of things (to the best of our ability). Others might say that this is, in fact, a ludicrous idea: all that we can do is to construct the best models possible to predict and describe reality. I think the latter is a useful way to think of it.

You may have heard the joke about the physicist who was asked to help a dairy farmer optimise milk yield, worked on his calculations for a few weeks, and came back confident that he had found a solution: …First, let’s assume that we have a perfectly spherical cow in a vacuum…. Well, rather than poke too much fun at this fictional physicist, I think that (if his math was right) this is actually a very good model. The reason I think so is that it’s immediately clear that, if the calculations based on the model are valid at all (and checking that is what empirical science is for), there’s a domain in which it is useful, but we are not tempted to extend the analogy beyond its proper domain. In a slightly more realistic example, we could calculate the features of a head-on collision between cars by treating them as simple lumps of material, using a model originally devised to describe collisions between lumps of clay. The model is useful (in that we can calculate some features of inelastic collisions), but we are not tempted to extend the car–clay analogy beyond the domain in which the model is useful: We know that cars behave very unlike lumps of clay in many ways (just as we know that cows sometimes act decidedly un-spherically, e.g. in their mode of locomotion).
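To make the cars-as-clay example concrete: treat the head-on collision as perfectly inelastic (the two lumps stick together), and conservation of momentum alone fixes the final velocity, while the kinetic energy that disappears is what goes into crumpling. This is standard textbook mechanics, sketched here only to show what the model actually buys you:

```latex
% Perfectly inelastic collision of masses m_1, m_2 with initial velocities v_1, v_2:
v_f = \frac{m_1 v_1 + m_2 v_2}{m_1 + m_2},
\qquad
\Delta E_k = \frac{1}{2}\,\frac{m_1 m_2}{m_1 + m_2}\,(v_1 - v_2)^2 .
```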

So far, so good. What these models have in common is that they use examples from common experience (we are all passingly familiar with the ideas of cows, cars, spheres, and lumps of clay) to explain other phenomena that take place more or less in the realm of common experience. However, things can get very different when we move to radically different scales. Consider, for instance, when we use models (analogies) to explore and explain large-scale features of space, or small-scale features of particles on the atomic or subatomic level.¹


Consider the atom.

You may know that the atom has a nucleus and a bunch of electrons. So, it looks like a lump of spheres (positively charged protons and electrically neutral neutrons) comprising the nucleus, orbited by a bunch of negatively charged electrons, equal in number to the protons. It all looks rather like a solar system with the electrons standing in for planets, orbiting their star, the nucleus. And this is a very good model that is not only evocative to us laymen, but has helped scientists figure out all sorts of things about how matter works. We can even elaborate the model to say that electrons spin about their axis, just like planets do; and they can inhabit different orbits—and change orbits—just like planets and satellites can.

Unfortunately, virtually nothing I said in the preceding paragraph is really true in any fundamental sense about what atoms are really like. It’s true that Niels Bohr’s model of the atom is rather like that, and that it is indeed helpful—but that’s not what the atom is like. It is here that the common sense familiarity of the model deceives us laymen into overextending the analogy to domains where it is no longer useful and valid. For instance, when I say that an electron can change its orbit, you may imagine something like a man-made satellite orbiting the Earth at 30,000 km, whose orbit decays gradually to 20,000 km. But an electron behaves nothing like that: It can only ever exist in certain orbits, and when an electron goes from one energy state to another, it goes there immediately. It is physically incapable of being in between. It’s as though our satellite suddenly teleported from its higher to its lower orbit, but even weirder because the satellite not only doesn’t, but in fact cannot ever inhabit an altitude of, say, 15,000 km.
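The discreteness is easy to state even if it is hard to picture: for hydrogen, the standard textbook result is that only certain energies are allowed, and a jump between two of them emits (or absorbs) a photon carrying exactly the difference.

```latex
% Allowed energy levels of the hydrogen atom, n = 1, 2, 3, ...
E_n = -\frac{13.6\ \text{eV}}{n^2},
\qquad
h\nu = E_{n_i} - E_{n_f}\ \text{for a jump from level } n_i \text{ to } n_f .
```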

Nor is it true that electrons and other subatomic particles spin in the sense that planets do. It’s certainly true that they have certain properties, called spin, and that if you use the same mathematics to work out their consequences as you would usually apply to simple spinning objects, you get good results. In this sense, a spinning sphere is a good model for a spinning subatomic particle. But I gather that there are at least some subatomic particles that have the rather curious property that they have to go through two full revolutions to get back where they started.
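That “two full revolutions” property belongs to spin-½ particles such as the electron, and in the standard formalism it falls straight out of the rotation operator; again, this is textbook quantum mechanics rather than anything original to this post.

```latex
% Rotating a spin-1/2 state by an angle \theta about the z-axis:
R_z(\theta) = e^{-i\theta\sigma_z/2}
\quad\Longrightarrow\quad
R_z(2\pi) = -\mathbb{1},
\qquad
R_z(4\pi) = +\mathbb{1}.
```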

In fact, when I said that the atom looks like a lump of spheres, et cetera, even that isn’t fundamentally true. Everything you perceive with your sense of sight consists of objects with considerable spatial extent, and your sight is a complicated function of the fact that photons of varying energy levels bounce off them. You can’t do this with atomic nuclei, let alone electrons, because they are too small. Shoot a photon at an atom and you will change the atom. (The fact that quarks are said to have colour is pure whimsy. Consider the fact that one type of quark is “charm”, and only one of them is formally labelled “strange”.)


One of my favourite ideas—not an idea of my own, mind!—is due to John Gribbin, author of In Search of Schrödinger’s Cat: Quantum Physics and Reality. In mentioning peculiar things like the above, where electrons ‘orbit’ nuclei and electrons ‘spin’, he suggests that you may cleanse your mind of misleading connotations by reminding yourself that these properties are not really the ones you’re familiar with at all. Instead, he suggests (if I recall correctly) not that spinning electrons orbit nuclei, but rather that gyring electrons gimbal slithy toves—I’m sure I’m getting this slightly wrong, but I am very sure that I am using the right words from Lewis Carroll’s Jabberwocky.

’Twas brillig, and the slithy toves
  Did gyre and gimble in the wabe:
All mimsy were the borogoves,
  And the mome raths outgrabe.

The key observation to keep in mind is that, to paraphrase Richard Dawkins, humans have evolved to have an intuitive grasp of things of middling size moving at middling speed across the African savannah. They make sense to us on a visceral level. The large and the small—stars and galaxies and clusters, or atoms and electrons and quarks; the slow and the fast—evolution and geological change and stellar evolution, or photons and virtual particles and elemental force exchange: We did not evolve to comprehend them, because comprehension thereof had no adaptive value for Pliocene primates (well, and the data were not available to them). We can only describe and predict them using science and mathematical models, and we can only make them seem comprehensible by constructing models and analogies that relate them to things that we do seem to understand.

But every so often we encounter natural phenomena that just don’t seem to make sense. The quantum-mechanical particles that simultaneously travel multiple paths and probabilistically interfere with themselves appear to contradict all common sense. But perhaps this is only because we attempt to think of them as little balls moving in much the way that little balls do, only faster and on smaller scales. It may be mathematically sensible to say that an electron travels infinitely many paths at once, with differential probability—which is clearly contradictory nonsense. Right? But in reality, there’s no such thing as an electron simultaneously travelling multiple paths; it’s just outgrabing rather mimsily. And the only reason why we have to model it as though our middling-scale phenomenon of probability were at issue is that we don’t have the ability to appreciate the gyring and gimbling of the slithy toves.


Now, if you enjoyed reading that, you’ll enjoy Gribbin’s In Search of Schrödinger’s Cat even more. Go forth and read it.

¹ It is a fact, sometimes held up as remarkable, that on a logarithmic scale of size from elemental particles to the universe as a whole, we’re somewhere in the middle. Once or twice, I have even heard people mention this as though it were imbued with mystical significance, in a sort of muddled anthropic principle:

On the smallest scales, elemental particles are too simple to do anything very interesting; on the largest scales, the universe is just a highly dilute space with some fluffy lumps called galaxy clusters floating around. In the middle, where we are, is where the interesting stuff happens: Large enough to combine the elemental effects into highly intricate patterns, but not so large that the patterns average out.

Such thinking is itself pretty fluffy and dilute. Of course we necessarily find ourselves on a level where “interesting” stuff happens (the weak anthropic principle: reflecting brains cannot occur on levels where they cannot exist), but I also think that this is an artefact of framing that could be applied to any level logically intermediate between other levels, so long as it’s the intermediate level that the observer is interested in.

On the smallest scales, cellular respiration just deals with chemical reactions too elemental to be very interesting; on the largest scales, muscle fibres just aggregate in big lumps that do nothing more than produce boringly linear forces of contraction. In the middle, where the individual cells are, is where the interesting stuff of semipermeable membranes, ionic drives, mitosis, protein synthesis and folding, and all that happens: Large enough to combine the elemental chemical reactions into highly intricate patterns, but not so large that the patterns average out.

haggholm: (Default)
2010-09-06 08:51 pm

Thought and language

I think it’s pretty well established that the Sapir-Whorf hypothesis is not true—at least not in the strong version that states that thought is formed by language, as immortalised in Orwell’s Nineteen Eighty-Four, where Oceania’s totalitarian regime attempts to eradicate the very idea of liberty by removing any linguistic means of expressing it.

Modern research shows that this couldn’t possibly work. You don’t think in English, or Newspeak, or any other human language. Rather, you think in what the cognitive scientist Steven Pinker calls Mentalese—that is, your brain has its own internal representations of concepts, presumably more amenable to storage in patterns of neural connections and synaptic strengths, or however it is that your brain actually does store things.

But this does not mean that there is nothing to the idea. The area remains controversial and actively researched, with some researchers arguing that real effects do exist and others arguing that there are none. Famous examples include words for directions: English speakers most comfortably use relative terms like in front of, behind, left, right, and so forth, while there are languages—some South American and Australian ones, I believe—where no such words exist: Instead all directions are expressed as cardinal (“compass”) directions; thus you would not be left of the house, but instead north of the house. And, some research indicates, native speakers of such languages perform better than English speakers at tasks involving cardinal directions, but worse at tasks where relative arrangement is crucial.

And personally, I know that while there are many things that I don’t remember any context for whatsoever, there are some facts whose source and linguistic context I recall very precisely. For instance, I know that I first learned the verb imagine from The Legend of Zelda: A Link to the Past for the SNES, where the antagonist, at the final confrontation, declares that I never imagined a boy like you could give me so much trouble. It’s unbelievable that you defeated my alter ego, Agahnim the Dark Wizard, twice! (I admit that, significantly, I did not recall the exact phrasing.)


Different languages express things in very different ways. I could quickly rifle through Steven Pinker’s The Language Instinct to find some really interesting examples, but instead I’ll just recommend it to you as a wonderful book on mind and language and go directly to a more pertinent example. I am currently re-reading Simon Singh’s (excellent) The Code Book, which—in discussing the cryptography used during World War II—contains a brief section on the Navajo language. It has some features very alien to speakers of Germanic languages. For instance, nouns are classified by ‘genders’ very unlike, say, the Romance language masculine and feminine nouns, or the Swedish ‘n’ versus ‘t’ genders. Instead, you have families like “long” (things like pencils, arrows, sticks), “bendy” (snakes, ropes), “granular” (sand, salt, sugar), and so on. Conjugation can get pretty complex.

But Navajo is also one of the languages with a grammatical feature (linguists call it evidentiality) that I find extremely interesting: If you make a statement in Navajo, it will be grammatically different depending on whether you describe something you saw for yourself or something you know by hearsay.

I find this very fascinating and also very, well, useful. I wish English had rules like this! In fact, I wish it had at least four: One for things I have experienced myself; one for things I have by hearsay; one for things which I believe because it is my impression that evidence overwhelmingly favours them; and one for things that I do not necessarily believe at all. I’ll think of them as “eyewitness”, “hearsay”, “reliable”, and “unreliable”.


Now, I believe it’s a fact that we all tend to suffer from some degree of source amnesia. That is, you go through life and absorb all kinds of factual statements (correct or incorrect). At the time when you hear a claim, you will hopefully evaluate its reliability based on its source—peer-reviewed science, expert opinion, intelligent layman, speculative, uninformed. (I roughly ordered them.) However, as time goes by, we have a tendency to remember putative facts but forget their sources. Thus, with time you risk ending up with a less reliable picture of a field of knowledge wherein you imbibed many different putative facts, as you start to forget which facts came from which sources and so which facts are more or less reliable.


And from all that, I can finally ask the question currently on my mind: Given that language may have some effect on one’s thinking, and given that word choice may stick with the memory of a putative fact, would imbibing putative facts in the context of a language wherein the source is grammatically encoded help us to retain the memory, if not of precise source, then at least of a form of ‘credibility rating’?

Sadly, of course, it’s a question I am completely unable to answer. I wonder if any experiments have been run. If not, I wonder if anyone could gather up some Navajo volunteers and find out…

haggholm: (Default)
2010-08-24 10:50 am

Offhand dismissal of self-refuting philosophies

1. Nihilism

There are no values. Everything is meaningless.

Dismissal: If that’s true, then it is ipso facto pointless to believe in it.

2. Relativism

There are no absolute values. Any set of values is equally valid in its own context.

Dismissal: Including, presumably, my set of values—viz., that it is important to apply universal standards.

3. Derrida-esque post-structuralism? post-modernism? deconstructionism? drug-induced dementia?

There is no absolute truth. Even the words true and false stem from misapprehensions of the world.

Dismissal: Clearly, that cannot be true. If you even assert that it is true, you are contradicting yourself. (I have seen someone attempt to defend this with a straight face; I do not understand how.)

haggholm: (Default)
2010-08-03 05:34 pm

“3D” movies and other media

There are few things so sure to annoy me as hype. Among those few things, of course, is factual inaccuracy. For both of these reasons, the new phenomenon¹ of 3D movies annoys me.

I will concede that in a narrow, technical sense, these movies are indeed 3D in that they do encode three spatial dimensions—that is, there is some information about depth encoded and presented. However, I don’t think it’s all that good, for various reasons, and would be more inclined to call it, say, about 2.4D.

Our eyes and brains use various cues for depth perception. The obvious ones that leap out at me, if you’ll excuse the pun, are

  1. Stereoscopic vision
  2. Focal depth
  3. Parallax

Let’s go over them with an eye (…) toward what movie makers, and other media producers, do, could do, and cannot do about it.

1. Stereoscopic vision

Odds are very good that you, gentle reader, have two eyes. Because these eyes are not in precisely the same location, they view things at slightly different angles. For objects that are far away, the difference in angle is very small. (Astronomers deal mostly with things at optical infinity, i.e. so far away that the lines of sight are effectively parallel.) For things that are very close, such as your nose, the difference in angle is very great. This is called stereoscopic vision and is heavily exploited by your brain, especially for short-distance depth perception, where your depth perception is both most important and most accurate: Consider that you can stick your hand out just far enough to catch a ball thrown to you, while you surely couldn’t estimate the distance to a ball fifty metres distant to within the few centimetres of precision you need.
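To put rough numbers on how quickly that angular difference shrinks with distance, here is a quick back-of-the-envelope sketch; the 65 mm interpupillary distance is a typical figure I am assuming, not anything from the post.

```python
# Vergence angle between the two eyes' lines of sight to an object straight ahead,
# as a function of distance: nearby objects subtend large angles, distant ones almost none.
import math

IPD = 0.065  # interpupillary distance in metres (typical adult value, assumed)

def vergence_angle_deg(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight, in degrees."""
    return math.degrees(2 * math.atan((IPD / 2) / distance_m))

for d in (0.3, 1.0, 5.0, 50.0):
    print(f"{d:5.1f} m -> {vergence_angle_deg(d):7.3f} degrees")
# 0.3 m -> ~12.4 degrees; 50 m -> ~0.07 degrees: stereo depth information all but vanishes.
```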

“3D” movies, of course, exploit this technique. In fact, I think of these movies not as three-dimensional, but rather as stereoscopic movies. There are three ways I know of to create them, and three ways I’m aware of to present them.

To create stereoscopic footage, you can…

  1. …Render computer-generated footage from two angles. If you’re making a computer-generated movie, this would be pretty straightforward.

  2. …Shoot the movie with a special stereoscopic camera with two lenses mimicking the viewer’s eyes, accurately capturing everything from two angles just as the eyes would. These cameras do exist and it is done, but apparently it’s tricky (and the cameras are very expensive). Consider that it’s not as simple as just sticking two cameras together. Their focal depth has to be closely co-ordinated, and for all I know the angle might be subtly adjusted at close focal depths. I believe your eyes do this.

  3. …Shoot the movie in the usual fashion and add depth information in post-processing. This is a terrible idea and is, of course, widely used. What this means is that after all the footage is ready, the editors sit down and decide how far away all the objects on screen are. There’s no way in hell they can get everything right, and of course doing their very very best would take ridiculous amounts of time, so basically they divide a scene into different planes of, say, “objects close up”, “objects 5 metres off”, “objects 10 metres off”, and “background objects”. This is extremely artificial.

All right, so you have your movie with stereoscopic information captured. Now you need to display it to your viewers. There are several ways to do this with various levels of quality and cost effectiveness, as well as different limitations on the number of viewers.

  1. Glasses with different screens for the two eyes. For all I know this may be the oldest method; simply have the viewer or player put on a pair of glasses where each “lens” is really a small LCD monitor, each displaying the proper image for the proper eye. Technically this is pretty good, as the image quality will be as good as you can make a tiny tiny monitor, but everyone has to wear a pair of bulky and very expensive glasses. I’ve seen these for 3D gaming, but obviously it won’t work in movie theatres.

  2. Shutter glasses. Instead of having two screens showing different pictures, have one screen showing different pictures…alternating very quickly. The typical computer monitor has a refresh rate of 60 Hz, meaning that the image changes 60 times every second. Shutter glasses are generally made to work with 120 Hz monitors. The monitor will show a frame of angle A, then a frame of angle B, then A, and so on, so that each angle gets 60 frames per second. The way this works to give you stereoscopic vision is that you wear a pair of special glasses, shutter glasses, which are synchronised with the monitor and successively block out every alternate frame, so that your left eye only sees the A angle and your right eye only sees the B angle. Because the change is so rapid, you do not perceive any flicker. (Consider that movies look smooth, and they only run at 24 frames per second.)

    There’s even a neat trick now in use to support multiplayer games on a single screen. This rapid flipping back and forth could also be used to show completely different scenes, so that two people looking at the same screen would see different images—an alternative to the split-screen games of yore. Of course, if you want this stereoscopic, you need a 240 Hz TV (I don’t know if they exist). And that’s for two players: 60 Hz times the number of players, times two if you want stereoscopic vision… (The arithmetic is sketched just after this list.)

    In any case, this is another neat trick but again requires expensive glasses and display media capable of very rapid changes. OK for computer games if you can persuade gamers to buy 120 Hz displays, not so good for the movie theatre.

  3. The final trick is similar to the previous one: Show two images with one screen. Here, however, we do it at the same time. We still need a way to get different images to different eyes, so we need to block out angle A from the right eye, &c. Here we have the familiar red/green “3D” glasses, where all the depth information is conveyed in colours that are filtered out, differently for each eye. Modern stereoscopic displays do something similar but, rather than using colour-based filters, display the left and right images with different polarisation and use polarised glasses for filtering. This reduces light intensity but does not entirely filter out a specific part of the spectrum from each eye.†
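The refresh-rate arithmetic mentioned under shutter glasses is simple enough to write down; the numbers below are the same back-of-the-envelope figures as above, not the specification of any actual display.

```python
# Display refresh rate needed so that each eye of each viewer still sees
# a given per-eye frame rate when frames are time-multiplexed on one screen.
def required_refresh_hz(per_eye_fps: int = 60, viewers: int = 1, stereoscopic: bool = True) -> int:
    return per_eye_fps * viewers * (2 if stereoscopic else 1)

print(required_refresh_hz())                               # 120 Hz: one viewer, stereoscopic
print(required_refresh_hz(viewers=2, stereoscopic=False))  # 120 Hz: two players, flat images
print(required_refresh_hz(viewers=2))                      # 240 Hz: two players, stereoscopic
```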

To summarise, there are at least three ways to capture stereoscopic footage and at least three ways to display it. Hollywood alternates between a good and a very bad way of capturing it, and uses the worst (but cheapest) method to display it in theatres.

2. Focal depth

All right, lots of talk but all we’ve discussed is stereoscopic distance. There are other tricks your brain uses to infer distance. One of them is the fact that your eyes can only focus on one distance at a time. If you focus on something a certain distance away, everything at different distances will look blurry. The greater the difference, the blurrier.
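How much blurrier? A common first-order approximation from geometric optics says the blur circle grows in proportion to the pupil diameter times the focus error measured in dioptres (reciprocal metres). A small sketch, with an assumed 3 mm pupil:

```python
# Geometric approximation of defocus blur for the eye: the angular diameter of the
# blur circle is roughly (pupil diameter) x (focus error in dioptres), in radians.
import math

PUPIL = 0.003  # pupil diameter in metres (~3 mm in ordinary light; assumed)

def blur_arcmin(focus_dist_m: float, object_dist_m: float, pupil_m: float = PUPIL) -> float:
    defocus_dioptres = abs(1.0 / focus_dist_m - 1.0 / object_dist_m)
    return math.degrees(pupil_m * defocus_dioptres) * 60  # convert radians -> arcminutes

# Eyes focused on something 0.5 m away; how blurry are objects at other distances?
for d in (0.5, 1.0, 2.0, 10.0):
    print(f"object at {d:5.1f} m -> blur ~ {blur_arcmin(0.5, d):5.1f} arcmin")
# 0 arcmin in focus, ~10 arcmin at 1 m, ~20 arcmin at 10 m: the greater the difference, the blurrier.
```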

In a sense, of course, this is built into the medium. Every movie ever shot with a camera encodes this information, as does every picture shot with a real camera—because cameras have focal depth limitations, too.

The one medium missing this entirely is computer games. In a video game of any sort, the computer cannot render out-of-focus things as blurry because, well, the computer doesn’t know what you are currently focussing on. It would be very annoying to play a first-person shooter and be unable to make out the enemy in front of you because the computer assumes you’re looking at a distant object, or vice versa. Thus, everything is rendered sharply. This is a necessity, but it is a necessary evil because it makes 3D computer graphics very artificial. Everything looks sharp in a way it would not in real life. (The exception is in games with overhead views, like most strategy games: Since everything you see is about equally distant from the camera, it should be equally sharp.)

Personally, however, I have found this effect to be a nuisance in the new “3D” movies. When the stereoscopic dimension is added to a film, I watch it less as a flat picture and more as though it truly did contain 3D information. However, when (say) watching Avatar, looking at a background object—even though stereoscopic vision informs me that it truly is farther away, because each eye receives nearly the same angle—does not bring it into focus.

This may be something one simply has to get used to. After all, the same thing is in effect in regular movies, in still photography, and so on.

Still, if I were to dream, I should want a system capable of taking this effect into account. There already exist computers that perform eye-tracking to control cursors and similar. I do not know whether they are fast enough to track eye motion so precisely that out-of-focus blurring would become helpful and authentic rather than a nuisance, but if they aren’t, they surely will be eventually. Build such sensors into shutter glasses and you’re onto something.

Of course, this would be absolutely impossible to implement for any but computer generated media. A movie camera has a focal distance setting just like your eye, stereoscopic or not. Furthermore, even if you made a 3D movie with computer graphics, in order to show it with adaptive focus, it would have to simultaneously track and adapt to every viewer’s eye movements—like a computer game you can’t control, rather than a single visual stream that everyone perceives.

3. Parallax

Parallax refers to the visual effect of nearby objects seeming to move faster than far-away ones. Think of sitting in a car, watching the light poles zoom by impossibly fast, while the trees at the side of the road move slowly, the mountains only over the course of hours, and the moon and stars seem to be entirely fixed. Parallax: Because nearby objects are close to you, your angle to them in relation to the background changes more rapidly.
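The car-window example is easy to quantify: for something you pass at a perpendicular distance d while travelling at speed v, the fastest it ever sweeps across your visual field (at the moment you pass it) is roughly v/d radians per second. A quick sketch with made-up but plausible distances:

```python
# Peak angular speed of a passed object: roughly v / d radians per second
# (v = your speed, d = the object's perpendicular distance at closest approach).
import math

speed = 100 / 3.6  # 100 km/h expressed in m/s

for name, d in (("light pole", 5), ("roadside tree", 50), ("distant mountain", 20000)):
    omega = speed / d  # rad/s at closest approach
    print(f"{name:16s} at {d:6d} m -> {math.degrees(omega):8.2f} deg/s")
# Light pole: ~318 deg/s (a blur); tree: ~32 deg/s; mountain: ~0.08 deg/s (appears nearly fixed).
```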

Of course, in a trivial sense, every animated medium already does capture this; again, it’s not something we need stereoscopic vision for. However, at close distances, a significant source of parallax is your head movement. A movie can provide a 3D illusion without taking this into account…so long as you sit perfectly still, never moving your head while a close-up is on the screen.

As with focal depths, of course, this is viewer-dependent and completely impossible to implement in a movie theatre. However, it should be eminently feasible on home computers and game systems; indeed, someone has implemented headtracking with a Wii remote—a far more impressive emulation of true three-dimensionality than any amount of stereoscopic vision, if you ask me.

Combined with eye tracking to monitor focal depth, this would be amazing. Add stereoscopic images and you’d have a perfect trifecta—I honestly think that would be the least important part, but also the easiest (the technology is already commercially available and widespread), so it would be sort of silly not to add it.

Thoughts

After watching a “3D” movie or two, I have come away annoyed because I felt that the stereoscopic effect detracted rather than added. Some of this is doubtless because, being who I am, the hyped-up claim that it truly shows three dimensions properly² annoys me. Some of it, however, is a sort of uncanny valley effect. Since stereoscopic vision tantalises my brain into attempting to regard these movies as three-dimensional, it’s a big turn-off to find that there are several depth-perception effects that they don’t mimic at all. If a movie is not stereoscopic, my brain does not seem to go looking for those cues because there’s no hint at all that they will be present.

Of course, it may just be that I need to get used to it. After all, “2D” movies³ already contain depth cues ([limited] parallax, [fixed] focal depth differences) without triggering any tendency to go looking for more. I haven’t watched a lot of stereoscopic imagery, and perhaps my brain will eventually learn to treat it as images-with-another-feature. For now, however, adding stereoscopic information to productions that can’t actually provide the full 3D visual experience seems to me rather like serving up cupcakes with plastic icing: It may technically be closer to a real cupcake than no icing at all, but I prefer a real muffin to a fake cupcake.


¹ It’s at least new in that only now are they widely shot and distributed.

² Technically all movies do depict three dimensions properly, but these new ones are really looking to add the fourth dimension of depth to the already-working height, width, and time.

³ Which are really 3D; see above.

† This should not have needed to be pointed out to me, as I have worn the damned polarised things, but originally I completely forgot them and wrote this as though we still relied on red/green glasses. Thanks to [livejournal.com profile] chutzman for the correction.

haggholm: (Default)
2009-06-12 07:53 pm

Reason and error

A reasoned belief is one that is founded on empiricism and a logical argument. Hopefully, we’ll all agree that logic is sound. If you argue that logic doesn’t work, then there’s no point in discussing anything at all with you, because no chain of reasoning can reach you—reasoning depends precisely on logic! Thus, I will presuppose that we agree on logic, though you may or may not agree that empiricism is necessary; some would even claim that empiricism is not epistemologically sound.

First, let me define what I mean by empiricism (I am no philosopher; there may be more precise terms). I do not mean that what I see is necessarily reality (au contraire, I am well aware that our senses are flawed and our brains are prone to certain types of delusion). What I mean by empiricism is simply the following assumption: There exists a systematic relationship between external reality and the percepts of a healthy brain. I must define the brain as healthy: If it is not, it may not follow logic, and it may be plagued by hallucinations to the point where it cannot track any sort of external reality. If so, alas, I posit that this brain is beyond help. It is not, I admit, impossible that this applies to any given brain, including my own; but absent evidence to that effect, it cannot serve me to believe it or to behave as though it were true, so I will assume that the percepts in my brain do systematically reflect an external reality. I do not, however, need to assume that the relationship is perfect—strictly speaking, all I need is statistical significance.

If I am allowed to assume both logic and empiricism (in the sense above), I can build up a consistent and coherent world view. It doesn’t matter (in principle) that the system is noisy—that some of my logic will be faulty and some of my perceptions incorrect. The assumptions suffice to formulate experiments, which allow me to verify my logic against observed reality, and cross-check my perceptions as much as I want. Repeated experiment lets me overcome the effects of noise in both argument and perception.
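One standard way to make “repetition overcomes noise” precise (a textbook statistical fact, not an argument from the post itself): if a measurement is repeated n times with independent errors of spread σ, the uncertainty of the average falls off as the square root of n.

```latex
% Standard error of the mean of n independent measurements with spread \sigma:
\mathrm{SE}(\bar{x}) = \frac{\sigma}{\sqrt{n}}
\qquad\text{(quadruple the repetitions, halve the noise).}
```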

I will even take a controversial step and claim that logic needs empiricism for validation—the two cannot be extricated from each other. You cannot, after all, use logic to prove that logic is true—it’s circular (it only works if logic is true to begin with). If you are mathematically inclined, you may note that logic can be represented as a form of mathematics—I wonder if perhaps Gödel’s Incompleteness Theorem can provide a formal version of this verbal argument?

In any case, empiricism supports logic. The reason is as follows: If you assume both empiricism and logic, you can formulate experiments so that, given percept A, you can form a statistical expectation about percept B. This, however, presupposes logic: if we don’t have logic, we have no reason at all to suppose that B will follow A with any degree of certainty. Because we can empirically observe that experiments do bear out our predictions, this supports the logical reasoning that we used to make them.

Of course this is far from iron-clad (and even in its weak form does also presuppose logic), but then we can’t really expect too much of an argument that tries to provide evidence for logic itself, now can we?


Having explained why I think that empiricism is a necessary assumption to make any sense of the world whatsoever, I suppose I should mention—however briefly—why I dismiss alternatives. The most obvious alternative is solipsism, the notion that none of the external world has any reality to it and all you can really know is your own mind. That’s not exactly nonsensical, but it’s not worth considering because it tells you nothing—it won’t get you anywhere. It provides no epistemological framework useful for interacting with anything (if everything you interact with is in your own head, why expect it to behave systematically?). It provides no reason to take logic seriously. It allows you no conclusions.

And, quite frankly, I think that all systems that reject empiricism and scientific thinking suffer from the exact same thing, to varying degrees. What you claim to intuitively know I may very well intuitively doubt, and if we are to settle it independently—well, we need logic and empiricism. If you claim that reality is somehow subjective and depends on your point of view, that your reality is not necessarily the same as mine, then we lack a framework within which to interact, and the claim is self-defeating, because you have no standing to declare that my view of reality as objective is wrong (if you do so declare, you are making a distinctly universal and objective claim).


A logical argument, in its most basic form, looks like A→B; A; ∴B. In English: “If A is true, then B must be true; A is true; therefore B is true.” A and B are both propositions, roughly “truth claims”. A is the premise. A→B is the inference that drives the argument. B is the conclusion. Now, there are four ways to be wrong:

  1. You believe in proposition B without any logical or empirical reason. This is just silly.
  2. Your premise is correct (A really is true), but your argument is not valid: A does not necessarily imply B.
  3. Your argument is valid, but not sound: Your premise, A, is not actually true.
  4. Your premise is false and your argument is invalid.

Note that it is quite possible to go from false premises to a true conclusion, or true premises to a true conclusion via an invalid argument. Reaching a correct conclusion is not proof of sound thinking!
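
For the literal-minded, the point can even be checked mechanically. Here is a small Python sketch (my own illustration, not any standard library) that brute-forces the truth tables: modus ponens has no counterexample, while the look-alike form “A→B; B; ∴A” does, even though that invalid form can still happen to land on a true conclusion.

    from itertools import product

    def implies(a, b):
        # Material implication: A -> B is false only when A is true and B is false.
        return (not a) or b

    # A form is valid if no truth assignment makes all premises true and the
    # conclusion false.

    # Modus ponens: premises A -> B and A; conclusion B.
    modus_ponens = all(
        b for a, b in product([True, False], repeat=2) if implies(a, b) and a
    )

    # Affirming the consequent: premises A -> B and B; conclusion A.
    affirming_consequent = all(
        a for a, b in product([True, False], repeat=2) if implies(a, b) and b
    )

    print(modus_ponens)          # True: valid, no counterexample exists
    print(affirming_consequent)  # False: A false, B true is a counterexample

    # Note that in the row A=True, B=True the invalid form's conclusion is true
    # anyway: a correct conclusion is no proof of a correct argument.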

The point of this discussion is that once you believe in a set of premises and in a conclusion, it’s pretty easy to overlook flaws in the inference. If I know I believe B because A is true, and nothing occurs to gainsay either A or B, I’m not likely to revisit the inference A→B with a very critical gaze, because clearly, it worked. However, this is not a reasonable thing to do if this argument is my only reason for believing in B—and since I may have made a mistake in any argument, I should try to be critical of all of them (it may not be my only reason for believing something, but the other reasons may be unsound arguments, so I should treat each one as important). To me, critical thinking lies in scrutinising the premises, but especially in watching inferences very carefully. I pay less attention to conclusions (in a debate, I am unlikely to attack them), because they will flow naturally from the argument once a sound argument is established.

haggholm: (Default)
2009-05-27 09:07 pm

“How extremely stupid not to have thought of that!” —Darwinian evolution as an armchair deduction

T.H. Huxley famously said that it was extremely stupid not to have thought of Darwin’s theory of evolution by natural selection. I wouldn’t go quite that far—for several thousand years, countless extremely smart people consistently failed to think of it, until Darwin and (independently) Wallace did so—but I do agree that it is, at least in hindsight, a lot more obvious than those millennia might have us believe.


Darwin formulated his theory after many years of incredibly extensive and meticulous collection of biological facts. This is invaluable for three reasons—it’s persuasive, it provides detail, and of course it’s necessary to validate the theory as an empirical science—but I believe that the broad strokes of evolution by natural selection can be deduced from armchair reasoning, armed only with a few basic biological facts—at least for sexually reproducing organisms, to which I shall restrict myself below. Perhaps someone will find that persuasive, and in any case I find the idea interesting enough to consider.

In brief, I believe that we need the following ideas:

  • Phenotypic (here, very roughly “biological”) traits are inherited.
  • Inheritance of non-acquired traits (trivially observed and deduced).
  • The particulate nature of inherited traits (trivially observed and deduced).
  • Traits are subject to random variation (long known).
  • The Malthusian Argument (accessible to armchair speculators).
  • The knowledge of an old Earth.

Let’s go through them one by one and see if you agree with my assessment.

First, phenotypic traits are inherited. This has been known for thousands of years—like begets like.

Second, we need the inheritance of non-acquired traits, which I claim is trivially observed. Note (beware of converse error) that I am not saying that it’s obvious that no acquired traits are inherited (and while the “Lamarckian” view of evolution is clearly wrong, there are some things and some senses in which acquired traits can be inherited—by and large, they are not genetic). I’m just saying that some non-acquired traits are clearly inherited. I’m going to call this the Grandparent Argument: Children can inherit traits that clearly run in the family even though such traits may skip a generation. If I share a trait with my grandfather that my father didn’t share, and if this correlation is strong enough to be reliably observed (at some statistical frequency), then the trait can’t be simply acquired, since neither of my parents had it. Yet even though neither parent had it, it was passed down the family: Ergo, there must be something passed down in the blood (actually, of course, the genes).

The particulate nature of genetics is the observation that mixing traits is not just a matter of blending. Let me define my terms: Blending here means that the result is the intermediate of the inputs. If I mix dark blue and light blue, I get medium blue. If I mix “genes for tallness” with “genes for shortness”, I get a person of intermediate height. Blending was long seen as a major problem for Darwin’s theory, until Mendel’s work on genetics became well known. However, it is, or should be, completely obvious that not all traits work like that. As Dawkins points out, the most obvious example of all is that the offspring of a man and a woman is (in a vast majority of cases) a boy or a girl, not an intermediate between the sexes. Other traits are equally obvious: the children of a blond and a black-haired parent may be either blond or dark, but never intermediate in colour.

Some traits appear to be “blended”. This is because there are lots of “on-or-off” gene “switches”; if my mother has 100 “on” switches for tallness, and my father has 100 “off” switches for tallness, I may end up with 50 “on” and 50 “off” switches and thus, intermediate height. Still, each gene is either the one thing or the other. Genetics is, as Dawkins says, digital. This isn’t exactly obvious—beware again of converse error: I claim that it’s obvious that particulate traits are inherited, not that it is obvious that no “blended” traits are ever inherited.
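
If it helps, here is a toy Python sketch of that pseudo-blending (the hundred “switches” are of course a made-up number): every individual gene is inherited whole from one parent or the other, yet the sum over a hundred of them looks blended.

    import random

    random.seed(42)
    LOCI = 100            # a hundred hypothetical on/off "tallness switches"

    mother = [1] * LOCI   # all switches on  (tall)
    father = [0] * LOCI   # all switches off (short)

    def child_of(p1, p2):
        # Each locus is passed on whole, from one parent or the other;
        # nothing in between.
        return [random.choice(pair) for pair in zip(p1, p2)]

    kid = child_of(mother, father)
    print(sum(kid), "switches on out of", LOCI)

    # Typically somewhere near 50: the phenotype (height) looks like a blend,
    # but every single gene is still either fully on or fully off.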

As for random variation, it’s not hard to deduce from looking at nature—whether by freak mutations, or by observing that minor new traits appear in populations of animals long and well observed, especially populations kept fairly homogenous, such as any kind of livestock, or Darwin’s pigeons.

The Malthusian argument can be very approximately paraphrased as follows: Animals tend to produce more offspring than is necessary to maintain the balance of a population. For instance, half of all geese hatched are female, and each female goose tends to have more than two offspring in her lifetime. Thus, in an ideal population with unlimited food and no predators, the population will increase geometrically—the next generation will be bigger by some factor X (e.g. twice as big); the generation after that, X times bigger again (e.g. twice as big again, or four times the original)—and so on. However, because resources are limited, this can’t go on indefinitely (or there would be infinitely many geese). Therefore, many animals die, and only some survive.
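
The arithmetic is easy to run from the armchair too. A minimal sketch, with invented numbers:

    # Invented numbers: ten geese to start, each generation twice the size of
    # the last (well within what real birds could manage, unchecked).
    population = 10
    for generation in range(30):
        population *= 2

    print(population)   # 10 * 2**30, i.e. over ten billion geese

    # No habitat feeds ten billion geese, so long before this point most
    # individuals in each generation must die, or fail to breed.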

I will gloss fairly rapidly over the existence of an old Earth. Cultural background provides different views on this—some cultures have viewed the world as eternal; some as created within a finite history. Against a backdrop of Christian creationism, it is perhaps not obvious that the Earth is in fact several billion years old, and this may have been a big part of the reason why Western science didn’t figure out evolution sooner than it did. Certainly, Darwin and Wallace followed fairly closely in the footsteps of geologists who discovered that the Earth was at the very least many millions of years old. Suffice it to say that we now do know, through radiometric dating and other methods, that the Earth is very old (there’s lots of good evidence for this, and no good evidence against it), and even if people before Darwin couldn’t account for that from their armchairs, we can.


Hopefully we can roughly agree on all of the above. My claim is that with this alone, we should be able to hash out a rough view of evolution by natural selection. It goes as follows:

Because we have the Malthusian argument, we know that not all animals survive. If an animal has 20 offspring over its lifetime, and its population remains roughly constant, on average fewer than two of those offspring will survive to leave offspring of their own (two, because the animal and on average one mate must be accounted for; fewer than two because generations overlap). Thus more than 18 out of 20, or over 90%, of all these animals die without any offspring.

We also know that they vary a little—through random variation, no two individuals are exactly the same (almost every human on Earth carries genetic mutations—you are almost certainly a mutant, dear reader!). Now, by a trivial survival of the fittest argument we can see that who survives and who dies without heirs won’t be completely up to chance. Of course, there’s an element of luck in it. Some animals may be struck by lightning. Sometimes, who gets eaten and who gets away is down to luck.

But “chance” doesn’t differentiate between traits (that’s the point!), and some traits produce some benefit. Maybe the animal that runs slightly faster is better at avoiding predators. Maybe the one with slightly sharper eyesight is that much better at spotting prey. The difference may be tiny, but there is a difference, and as the process is repeated over many millions of years, we can see that in the long run, because phenotypic traits are inherited, a beneficial trait will help a “family line” to prosper. (Let me reiterate, because I have seen people have difficulty with this, that random chance does not cancel out these benefits. It’s true that lightning may strike a fast animal as easily as it may strike a slow one…but no more easily. Random chance doesn’t favour the “negative” traits, so there’s still a benefit!)
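
Since this is the step people stumble over, here is a toy simulation in Python (all the numbers are invented) showing a trait with a 2% survival edge spreading through a population despite indiscriminate “lightning” that kills the fast and the slow with exactly equal probability.

    import random

    random.seed(0)

    POP = 1000
    GENERATIONS = 200
    P_LIGHTNING = 0.05                          # sheer bad luck, identical for everyone
    P_SURVIVE = {"slow": 0.50, "fast": 0.52}    # a tiny edge for the fast variant

    def survives(animal):
        if random.random() < P_LIGHTNING:       # chance doesn't care about traits
            return False
        return random.random() < P_SURVIVE[animal]

    # Start with the beneficial variant in only 10% of the population.
    population = ["fast"] * 100 + ["slow"] * 900

    for _ in range(GENERATIONS):
        survivors = [a for a in population if survives(a)] or population
        # Offspring inherit their parent's trait; the cap of POP individuals
        # stands in for limited resources (the Malthusian squeeze).
        population = [random.choice(survivors) for _ in range(POP)]

    print(population.count("fast") / POP)
    # With these numbers this typically ends at or near 1.0: the tiny edge wins
    # out over the generations, because the lightning never favours "slow".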

One argument that was levelled against this reasoning about a hundred years ago is that continued breeding will tend to blend a trait into the population, but as we established earlier, inheritance of traits is particulate in nature, so this doesn’t really happen. It should also be noted that because there is some “particulate pseudo-blending” going on, as also mentioned above, most dramatic evolutionary change happens in isolation—e.g. when a small population (perhaps a pair of animals) is separated from the rest (e.g. insects or birds blown by wind to an island, or a herd of deer isolated by a new river…).

We haven’t even finished working through my list of basic observations, and already we’ve started to deduce the existence of population genetics.

We’re now ready to show that evolution must be “Darwinian”—Lamarckianism isn’t enough. If you’ve agreed with me all the way, you’ll already see that the inheritance of non-acquired traits plays a part. A second angle on it is to observe that many acquired traits are obviously not inherited—if you lose a limb, or grow a beard, your child will not inherit it. (Even aphids and stick insects, clones of their mothers, inherit the usual number of limbs from maimed ancestors.) A third strike against it is that while Lamarckian evolution could (if true) explain some fairly crude traits (stronger arms, longer giraffe necks, etc.), it provides no explanation whatsoever for novel traits—the first ancestor of the eye, a light-sensitive patch, is a novel trait with potential benefits (aid in regulating circadian rhythms, detection of predators who block light sources, etc.), but cannot be explained by Lamarckianism. There must be inheritance of non-acquired traits for evolution to work, and we have already seen that there is.

The armchair argument can’t tell us that nothing Lamarckian happens, but it can and does tell us that Darwinian evolution must happen, and if Lamarckianism is true at all, it’s only in conjunction therewith.


…And so, hey presto!, we have deduced a very rough outline of Darwinian evolution with only the most basic of facts and observations, quite without the reams of data that Darwin had. Of course it really is extremely rough: We have a crude Darwinian theory; we assuredly do not have Darwin’s!—let alone modern evolutionary theory, which has a century and a half’s worth of evidence, further discoveries, statistics, integration of genetics, and roughly a gazillion other advances on what Darwin, however brilliant, had figured out. I’m sure that a lot of little holes can be poked in it, but I am after all presenting an armchair sketch, not a waterproof argument; if you want that, read peer-reviewed books with peer-reviewed references (I purposely avoid giving any links or references precisely to emphasise the armchair nature). As such, I’m not going to bother refuting a lot of the usual arguments against evolution—apart from the fact that most of them are silly, this is just meant to be a rough, quick, and comprehensible (not comprehensive!) positive argument for it.

There is one I want to mention briefly, though (apart from the Old Earth bit already mentioned above): It might very easily be argued that my argument lays out a good case for “microevolution”, and someone might seize upon this and exclaim that I haven’t proved “macroevolution”. To this I reply that the distinction between “microevolution” and “macroevolution” is spurious. It is true that biologists may use these terms, but that does not mean that they are distinct categories. “Macro” and “micro” just mean “big” and “small”, respectively, and the “two forms” of evolution are the same, unless you can demonstrate that there is an upper limit to what “microevolution” can cumulatively achieve. Otherwise, you may as well agree that a man can walk fifty miles (with rest stops) but refuse to admit that he could possibly walk a thousand, because the former is “microwalking”, which has been observed, whereas the latter is “macrowalking”, which hasn’t been and is impossible.

I hope you have found this at least evocative. Please let me know if there are any big, glaring gaps—but leave the little, niggling gaps for Talk Origins or for another time.

haggholm: (Default)
2009-04-14 04:01 pm

The “Big Pharma” Gambit

In areas like alternative medicine and the anti-vaccine movement, one argument that is frequently brought up is that the status quo looks the way it does because Big Pharma is suppressing the argumentor’s favoured research results—they suppress evidence that vaccines cause health problems because they are greedy and want to make money from vaccines even if harmful; they suppress evidence that homeopathy works because they are greedy and do not want to lose the drug market to homeopaths…

First, let me state quite baldly that I firmly believe that the large pharmaceutical companies fall pretty squarely in the Big Evil Corporation category and frequently engage in questionable or reprehensible behaviour. Certainly many of their executives are as motivated by greed, and as ruthless, as executives of other Big Evil Corporations, like oil companies. I do not dismiss out of hand claims that Big Pharma are doing questionable things. But obviously, that doesn’t mean that they are guilty of all the evils of which they are accused, and we have to look at the actual claims, and corroborating evidence, in order to figure out what’s what.

Frankly, I find the vaccine claim outright puzzling. I have taken vaccines for a number of different things. Every vaccine shot I have taken, even ones I had to pay for entirely out of my own pocket, has cost less than or around $50. Assuming that a vaccine requires one booster shot, that represents a sales potential of $100 for my entire lifetime, a statistical 80 years or so. That’s not a lot of profit.

On the other hand, if no vaccine is available for a disease, the disease has to be treated and controlled. Consider polio, a disease all but eradicated by vaccines:

There is no cure for polio. The focus of modern treatment has been on providing relief of symptoms, speeding recovery and preventing complications. Supportive measures include antibiotics to prevent infections in weakened muscles, analgesics for pain, moderate exercise and a nutritious diet. Treatment of polio often requires long-term rehabilitation, including physical therapy, braces, corrective shoes and, in some cases, orthopedic surgery.

Portable ventilators may be required to support breathing. Historically, a noninvasive negative-pressure ventilator, more commonly called an iron lung, was used to artificially maintain respiration during an acute polio infection until a person could breathe independently (generally about one to two weeks). Today many polio survivors with permanent respiratory paralysis use modern jacket-type negative-pressure ventilators that are worn over the chest and abdomen.

Suppose that some utterly ruthless Big Pharma executive sits down and does the math on this. We can either sell $100 worth of vaccines to quite a lot of people, but once only per patient lifetime…or, if we make no vaccine available (or allow it to be banned due to spurious health concerns) we can sell antibiotics, analgesics, braces, corrective shoes, surgical equipment, iron lungs…

I don’t claim to be an expert on the market, but I postulate that vaccines just aren’t big money makers compared to after-the-fact treatments, and obviously vaccines compete with curative and palliative drugs. As someone said in agreement with my opinion,

I had a friend working as an assistant on big pharm sponsored vaccine research project back in the early 90s. The pharm company eventually pulled funding for the research, and the researchers suspected that the motivation was that producing drugs to treat the illness in question was a better moneymaker than funding relatively expensive research to develop a vaccine. The vaccine would have essentially killed a bunch of the company's product lines.

Let me make this very explicit: This quote is pure anecdote and is not intended to be used as evidence, but presented as an example of why my argument is plausible. Nor do I have the market research and relative cost/profit analyses for vaccines versus conventional drugs. However, my point is that in order for the “Big Pharma” conspiracy theory to hold any water at all, this argument has to be addressed. In short, conspiracy theorists who view vaccines as poisoning for profit must believe that

  1. Big Pharma executives are so ruthless and greedy that they are willing to poison millions of children (including their own) for money;
  2. They do this, and get away with it with no sign of internal whistle-blowers (the critics are always outside critics, with no sign of leaked memos as is usually the case in attempts at corporate cover-ups); and
  3. Vaccine production is so profitable that even after R&D costs, it earns the company more money than selling curative and palliative treatments.
…And if they wish to be believed, they have to substantiate that.


Similar claims are often raised by supporters of “alternative” medical treatments like homeopathy and naturopathy. It’s not quite so sinister—they tend to accuse Big Pharma not of mass poisoning campaigns, but merely suppressing their own (surely superior) treatments for profit.

Once again, however, these economic accusations are fast, loose, and vague. Even if homeopathic remedies worked, would it really profit Big Pharma to suppress them? I would rather imagine that they would attempt to take over that market and drive the smaller players out. Simply by pushing for increased regulation (requiring similar standards of evidence of effectiveness and safety for “alternative” drugs as for conventional ones), they would kill a lot of companies that lack the R&D resources to run the necessary studies. (Why don’t they do it already? Well, since these treatments don’t work, the studies would never pass muster.)

Do I know that this is the way the finances would work out? Absolutely not! But the careful avoidance of ever raising the question makes me think that the alt-med advocates would rather no one think it through—it’s much easier, after all, to go with a knee-jerk Big corporate evil! reaction. There’s no reason to take the greed accusation seriously unless it can be shown to be logically coherent.

This argument has a second irony, of course: Alternative medicine is a huge industry. Billions upon billions of dollars are spent on “alternative” treatments every year—without all the R&D costs that real pharmaceutical companies have to battle with; freed of the expenses and vast time commitments of running large-scale, double-blind medical trials to show that the drugs work. Tu quoque is a logical fallacy, but when the argument amounts largely to character assault (Big Pharma is greedy and evil), it may be worth keeping in mind that “alternative medicine” is no more innocent of the character flaw at hand.

haggholm: (Default)
2009-04-13 12:57 pm

The Secret, or the Law of Attraction

Someone on a forum did me the distinct disfavour of posting the first 20 minutes of the trainwreck film, The Secret, where the “secret” refers to the “Law of Attraction”. Briefly, the idea is that thinking about things will cause them to come about—think about the bad things in life and bad things will happen to you; think about good things and they will happen instead. I’m not going to waste time and space talking about why this is preposterous. What motivates me to write this is rather my anger at this, and what I consider to be the harmful consequences.

Lots of people actually seem to believe in this crap. To some extent, that isn’t too surprising. The facile reasons are, first, that it certainly fits in with a lot of New Age magic; second, that the testimonials look good (the happy supporters chosen to speak out really are happy—of course they are, living in $4.5 million mansions…); and third, that it is endorsed by highly visible and respected idiots, like Oprah.

More importantly, however, it ties in very neatly with things that are actually true.¹ Of course positive thinking tends to improve your life in many ways—it’s a well established psychological fact that acting happy tends to make you happier; happiness and confidence improve your interpersonal skills and relations; avoiding focusing on negative things frees you from brooding over misfortunes. None of this validates the “secret”. The fact that your mental attitude is connected to your mental state is painfully obvious, and a positive demeanour improving interpersonal relations (and through that avenue, your life) is only evidence that people respond better to happy, confident people than to sad or aggressive ones, and does not require the existence of some mysterious universal energy found by viciously abusing quantum physics.

All right, then, some hypothetical person might ask, what is the harm? It may be silly, but if it motivates people to engage in positive thinking, which you freely acknowledge is a good thing, then why should we discourage this stuff?

Apart from the fact that I am as dedicated as I am able to the pursuit of truth, and consider it morally valuable in its own right, I do think that this silliness has a very sinister side.

The first and most obvious problem is that when people put their trust in anything that doesn’t actually work, there is a risk that they will eschew real, working solutions because they think they already have one. For instance, the 20-minute clip from The Secret has someone claiming: I’ve seen cancer dissolved.

Let me reiterate that. The Secret strongly implies that positive thinking can cure cancer.

That is when it ceases to be funny. People who swallow this whole are led to believe that positive thinking suffices to cure cancer. This misinformation can kill. Nothing cures cancer like surgical steel (preferably with chemotherapy and/or radiation therapy as adjuvant therapies to prevent recurrence). Failing to seek proper help can kill you, painfully and horribly.

And, of course, we can extrapolate this to any other medical condition, or for that matter, any other problem.

The second repugnant consequence of this belief in the “Law of Attraction” is that while the film-makers focus strongly on the supposedly empowering effect of believing that positive thinking can change your life, the explicit corollary is that negative thoughts lead to bad things happening. They make this very clear: These people assert not only that negative thoughts will make bad things happen, but that whenever bad things keep happening to you, it is because you are thinking negative thoughts. It’s under your control, they say, and you have the power to change it—but if events are bad, you caused them to happen.

We know, of course, that this is bunk. However, those who believe it are also made to believe that all their misfortunes are their fault. If your house burned down, if you developed cancer, if you were raped—according to the makers of The Secret, this is your fault: You made it happen. This is not only nonsensical, it is also an extremely cruel thing to allege.


¹ There is a parallel here to the view of some bloggers, such as “Orac”, of “complementary and alternative medicine”, which is perceived to usurp some actually valid ideas, like nutrition and exercise: CAM practitioners prescribe good nutrition, exercise, and homeopathic remedies; good nutrition and exercise are clearly good for your health; therefore homeopathy must be good—stated so baldly, the intellectual bankruptcy of the notion is obvious.