haggholm: (Default)

To change one’s mind when presented with sufficient evidence is a hallmark of a rational person. This is the ideal of the scientific method, and the failure to pursue it is the bane of human rationality. We are burdened with various cognitive biases and shortcomings that make all of us naturally bad at it: We tend to seek out observations that confirm our beliefs and credit them when we find them; we tend to be more critical and skeptical of observations that contradict what we believe to be true. I often speak at length about this, criticising others when they cling to their positions in the face of contrary evidence.

So what about me, then? When do I change my mind?

I must regretfully admit that I can’t think of a great many examples. Probably no small part of this is due to the fact that no one, however much they may appreciate the importance of evidence and the perniciousness of cognitive biases, is actually immune to those biases. I do my very best to re-examine my beliefs when rationally challenged, but I suspect that every one of us carries a great many beliefs obtained for irrational reasons that, correct or incorrect, we just never come to critically re-examine. As a child you were taught a thousand thousand things, and as a child you had no choice but to absorb them, no framework for critical evaluation. Probably you will not re-examine all of those beliefs in your entire lifetime.

I’d like to think that another significant part of this is that I try not to form beliefs without a rational basis. I like to think that I rarely say anything that is flat-out wrong, because I try to avoid making claims that I’m not confident about. Maybe there’s something to this—I hope so—but no one is infallible; I am inevitably wrong about some things, ergo there must be beliefs I ought to change, but have so far failed to.

Maybe the most obvious example of an area where I have changed my mind is religion, but it seems kind of trivial. It was only as a child that I was capable of blind faith, the conviction of things not seen; I grew up and grew out of it when I realised that there just wasn’t anything supporting it, and I was firmly atheist long before my voice changed. For a long time I held the curious faitheist position that although it’s mistaken, it’s still somehow noble and worthy of respect to have committed faith; I have changed my position here too, recognising that holding irrational beliefs is inherently bad (and in fact intellectually a much worse crime than happening to reach erroneous conclusions). But all that is rather trivial; the total dearth of supporting observations makes it childishly easy to discard.

A much more recent, complicated, and difficult belief was upset some time last year or the year before, when I first started reading and learning how little parents matter to the personalities of their children. Steven Pinker summarises it in this video; the gist of it is that for most behavioural metrics,

  • up to 50% of the variation in the trait is genetic;
  • 0%–10% of the variation is due to parenting/upbringing;
  • the rest is due to culture, peer groups, &c.

This is illustrated by facts such as

  • adoptive siblings are hardly more similar than people picked at random;
  • monozygotic twins reared apart by different parents tend to have very similar personalities, even if they are raised in very different environments and never meet.
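The variance shares above make a concrete, testable prediction about sibling correlations. Here is a toy Monte Carlo sketch (purely illustrative; the exact 50%/5%/45% split is an assumption picked from within the ranges quoted) comparing identical twins reared apart with adoptive siblings raised together:

```python
import random

def trait_pairs(shared_genes, shared_parenting, n=100_000):
    """Simulate pairs whose trait = genes + parenting + other environment,
    with assumed variance shares of 0.5, 0.05, and 0.45 respectively."""
    xs, ys = [], []
    for _ in range(n):
        g1 = random.gauss(0, 0.5 ** 0.5)
        g2 = g1 if shared_genes else random.gauss(0, 0.5 ** 0.5)
        p1 = random.gauss(0, 0.05 ** 0.5)
        p2 = p1 if shared_parenting else random.gauss(0, 0.05 ** 0.5)
        xs.append(g1 + p1 + random.gauss(0, 0.45 ** 0.5))
        ys.append(g2 + p2 + random.gauss(0, 0.45 ** 0.5))
    return xs, ys

def corr(xs, ys):
    """Pearson correlation, computed by hand to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
# Identical twins reared apart: share all genes, no parenting -> r near 0.5
twins_apart = corr(*trait_pairs(shared_genes=True, shared_parenting=False))
# Adoptive siblings: share parenting, no genes -> r near a mere 0.05
adoptive = corr(*trait_pairs(shared_genes=False, shared_parenting=True))
```

Under these assumptions the twins come out far more alike than the adoptive siblings, which is just the pattern the adoption and twin studies report.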

I found this surprising. Indeed, if a fact could be offensive, this would be pretty close to it. Parenting doesn’t matter? Intuitively this makes roughly no sense at all to me. My parents matter intensely to me. Surely they shaped me? I can identify many, many traits, beliefs, and tendencies that correlate strikingly with my parents’. For better or worse, I think of myself as very much my father’s son, and I share many of his strengths and weaknesses. I have the same intellectual bent that he had, and many of the same interests.

And I value my parents. My father was a very flawed man, but he was always good to me, I got along well with him, and I loved him in spite of all his many flaws. My mother is wonderful, and I often consider myself very lucky in that she is so accepting, so ready to have a grown-up parent/child relationship with me, even when we deeply disagree on things. The notion that their influence on me was much less than I had thought seems…disparaging.

But the fact of the matter is that surprising and counterintuitive though it may be to me, that doesn’t alter the truth one whit, and I know damn well that intuition does not trump evidence. There are various studies on the subject, and I gather many are summarised in The Nurture Assumption by Judith Rich Harris, which I really ought to read at some point… If the evidence contradicts my intuition, then I should discard my intuition, not the evidence.

There are also perfectly good explanations for the observed correlations under the working theory above. Of course I resemble my parents in many respects: I share 50% of my genetic material with each of them, and just as I look quite like my father did when he was young, demonstrating that he contributed to my visible phenotype, so he surely contributed to my behavioural phenotype, as well. And while I wasn’t brought up in quite the same environment as my parents were, still there were surely similarities.

Additionally, I can think of hardly anything more conducive to confirmation bias than an informal analysis of a child’s resemblance to its parents. Of course I can think of commonalities: After all I spent eighteen years living in the same house as my parents, and had extremely ample time to learn just what traits and behaviours I shared with them.

Finally, I think that the deep personality traits that psychologists measure—agreeability, neuroticism, and so on—are probably less tangible, less open to obvious observations, than more superficial behaviours. It’s surely true that I read Biggles books as a child because my father had done so when he was a boy, had saved the books, read them aloud to me for a while. But this is a very superficial behaviour compared with whatever personality traits make me someone who enjoys shutting himself in with a book.

Of course, all of this is just reinterpreting old data in a new framework: Take the observations I made under the paradigm of “I am this way because parenting so made me”, and reinterpret them under the paradigm of “Parenting doesn’t matter nearly so much; genes and social environment are more important”. This is a fine thing to do, but were I unable to account for these data, still I should have to bow to the evidence: My personal, anecdotal observations do not trump the data.

I should add that I am not convinced that no kind of parenting can have fundamental, important effects. I vaguely recall reading, and at any rate I have seen nothing to contradict it, that a truly poor environment, such as abusive parents, can have deep and terrible effects on a child. I do not base this on any real data, so I will not vouch for its truth at all, but until I read otherwise this is my working hypothesis: Terrible parents can psychologically damage their children and have disproportionate influence, for the worse. Parents who aren’t terrible, though, have surprisingly small effects on personality, and while a good parent is a very different creature from a terrible one, the outcomes vary surprisingly (disappointingly!) little between mediocre, good, and great parents.

Here, though, more data are needed.

(You may protest that people who are particularly good and responsible tend to have children who grow up to be particularly good and responsible. To this I say: Recall that these are people who may be genetically predisposed to be particularly good and responsible, and with up to 50% heritability in most personality traits, it’s no wonder if that is passed down.)

haggholm: (Default)

An argument that often comes up in two very different contexts—theist vs. atheist debates, and discussions of the Everett Many-Worlds/Multiverse interpretation of quantum mechanics—is the argument from Cosmic Fine-tuning. In a nutshell, this argument goes something like this: The physical constants of our universe, such as the strength of gravity, the nuclear forces, electromagnetism, inflation, and so forth, all have specific values. If these values were only very slightly different, life as we know it—or life, period!—could not exist, e.g. because atoms could not cohere as molecules, or because matter would not congregate in stars and planets, or because all matter would collapse into black holes, &c., depending on just how these cosmic constants were altered. Thus, the constants are “fine-tuned” to support life.

The discussion can then veer off onto different trajectories, such as “ergo God” or “ergo an Everett/Wheeler multiverse of universes with differing parameters where we, via the anthropic principle, clearly inhabit one that supports life”, but let’s stop at the basics. I’ve long been uncomfortable with it, and I would like to tell you why—but unfortunately the commenter eric, here, already did so better than I can.

Before quoting, I will summarise, paraphrase, and interpret his comment (and my thoughts) as follows: The fine-tuning argument is incomplete without a means to estimate the probabilities of the parameter values; and being incomplete, it is not useful. It’s all very well to say “Wow, G is exactly 6.67384E-11 N(m/kg)²! That’s incredible!”, but what you’re missing is a recognition of the important question, “What are the odds of that?”, let alone an answer to that question. To pull four cards at random from a deck of cards and see four aces looks prima facie remarkable—but is truly so only if you know for a fact that the deck isn’t all aces.
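That deck example can be made exact with a line of combinatorics; note that the “remarkable” draw is only remarkable under an assumption about the deck’s composition:

```python
from math import comb

def p_four_aces(deck_size=52, aces_in_deck=4):
    """Probability that four cards drawn at random are all aces, as a
    function of how many aces the deck actually contains."""
    if aces_in_deck < 4:
        return 0.0
    return comb(aces_in_deck, 4) / comb(deck_size, 4)

standard = p_four_aces(52, 4)    # 1 in 270,725: genuinely remarkable
all_aces = p_four_aces(52, 52)   # probability 1.0: not remarkable at all
```

Same observation, wildly different probabilities; everything hinges on the premise about the deck.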

For more depth, read what eric said in response to another commenter:

That if, for example, gravity was a few percent more, little if anything would escape nova explosions; if it was a little less, we would be unlikely to get hot stars forming. And so on with all the 5 forces

I answered that in @26 (albeit I didn't refer back to your post). Those fine tuning calculations only look at one constant at a time, so they are determining nothing more than a lower limit on the probability of the universe.

Now estimates of the probability of these variables must currently be imprecise but however you cut it the odds against them all must contain a fair number of zeros.

Our estimates aren't imprecise, they are totally without foundation. To show that, let's look at G in standard units as an example. For reference: 6.67384E-11 N(m/kg)^2.

The first question we ask is, what is the range of values it could take in other universes? From 1 to 1E-20? From 1E100 to 1E-100? From 7E-11 to 6E-11? From 6.673840E-11 to 6.673850E-11? We don't know. We don't have a clue. And yet the probability of it being 6.67E-11 is heavily dependent on what the range is. Right? The fine tuners assume that it can have any value. While we have to make some assumption about it, I think you will agree with me that that assumption leads to the smallest possible estimate of probability. So just considering this one constant, and one fact about it (range), we've already got a circularity problem going with our fine tuning argument: a low probability is not just a result, it's also a premise! It's built into the assumptions on which the fine tuners build their argument.

The second question we ask is, what's the distribution of probabilities across values? Here the fine tuners are actually on a bit better ground: with no information on that whatsoever, it's reasonable to assume that every value is equally likely, which is what they do. But we need to keep in mind that an assumption made because we are completely ignorant of what the ground truth is, is all this is. If the probability of any given value is not flat but follows (for example) a Gaussian distribution centered near our G, then even with a many-order-of-magnitude range of allowable values, the probability of getting our G might be reasonably high.

So what we really need is a model of how universes form, to see whether the constants can vary freely at all, and if so, over what ranges, and whether they vary independently or in some dependent manner. Only then can we know whether our constants are a remarkable coincidence or just an inevitable state of being. Speaking for myself (after all, I don’t know eric from Adam, as it were), I am not ruling out the idea that the cosmic constants may be somehow remarkable (and an Everett–Wheeler multiverse seems like a sensible solution to that puzzle), but as of right now, the fine-tuning argument is too incomplete to be acceptable as an argument for anything.
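To see just how sensitive the fine-tuning math is to those assumptions, here is a sketch. Every input below (the ±1% tolerance for a “universe like ours”, the candidate ranges, the Gaussian spread) is a pure assumption, which is exactly eric’s point:

```python
import math

G = 6.67384e-11          # observed value, N(m/kg)^2
TOL = 0.01 * G           # call a universe "like ours" if G' is within 1%

def p_uniform(lo, hi):
    """P(|G' - G| < TOL) if G' is drawn uniformly from [lo, hi]."""
    return min(2 * TOL, hi - lo) / (hi - lo)

def p_gaussian(mean, sigma):
    """The same probability if G' is Gaussian(mean, sigma)."""
    def cdf(x):
        return 0.5 * (1 + math.erf((x - mean) / (sigma * math.sqrt(2))))
    return cdf(G + TOL) - cdf(G - TOL)

narrow = p_uniform(6e-11, 7e-11)   # ~13%: hardly a miracle
wide = p_uniform(0.0, 1.0)         # ~1e-12: "fine-tuned!"
peaked = p_gaussian(G, 0.05 * G)   # ~16% if values cluster near ours
```

Identical observation, probabilities spanning eleven orders of magnitude, and nothing but our choice of premise to tell them apart.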

haggholm: (Default)

The Second Law of Thermodynamics (“entropy always increases [in a closed system]”) is one of the most solid and accepted principles in any science anywhere, ever. Sir Arthur Eddington, no slouch, famously said that

The law that entropy always increases, holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations — then so much the worse for Maxwell's equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

However, “entropy always increases” is a very sloppy formulation, albeit useful shorthand. To cite Wikipedia,

In classical thermodynamics, the concept of entropy is defined phenomenologically by the second law of thermodynamics, which states that the entropy of an isolated system always increases or remains constant. Thus, entropy is also a measure of the tendency of a process, such as a chemical reaction, to be entropically favored, or to proceed in a particular direction. It determines that thermal energy always flows spontaneously from regions of higher temperature to regions of lower temperature, in the form of heat. These processes reduce the state of order of the initial systems, and therefore entropy is an expression of disorder or randomness. This picture is the basis of the modern microscopic interpretation of entropy in statistical mechanics. Here one defines entropy as the amount of information needed to specify the exact physical state of a system, given its thermodynamic specification. The second law is then a consequence of this definition and the fundamental postulate of statistical mechanics.

My understanding is that the Second Law basically says that any closed system will tend toward thermal equilibrium (this is also a sloppy formulation, but reflects the fact that my own understanding is vague): Heat spontaneously flows from hotter bodies to colder bodies, never the reverse; and without external energy input a system will always end up thermally uniform.
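That directional claim has a one-line arithmetic core. When heat Q flows between two reservoirs, the total entropy change is dS = Q/T_cold - Q/T_hot, which is positive precisely when the flow goes from hot to cold. A minimal sketch:

```python
def total_entropy_change(q, t_hot, t_cold):
    """Entropy change (J/K) when heat q (J) leaves a reservoir at t_hot
    and enters one at t_cold (both in kelvin): q/t_cold - q/t_hot."""
    return q / t_cold - q / t_hot

# Heat flowing the way we always observe it (hot to cold): entropy rises.
spontaneous = total_entropy_change(100.0, t_hot=400.0, t_cold=300.0)
# The reverse flow would lower total entropy, which the Second Law
# forbids in an isolated system.
forbidden = total_entropy_change(-100.0, t_hot=400.0, t_cold=300.0)
```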

This is enough to make me think that, while I am not a physicist and will never be so foolhardy as to attempt to use my vague understanding of thermal entropy to formulate an argument, at least I have a general sort of idea what it’s about.

The problems begin when entropy is used in contexts of information and “disorder”. A fairly typical example may be cited from The Matter Myth by John Gribbin and Paul Davies, which I am currently reading:

A simple example of irreversible change from order into chaos occurs whenever a new deck of cards is shuffled. Starting with the cards in numerical and suit order, it is easy to jumble them up. But days of continuous shuffling would fail to reorder them, even into separate suits, let alone numerically.

Of course this is an informal example meant to convey the basics of the concept to a layman rather than an operational definition—but for this layman, at least, it’s inadequate to explain the concept because it smacks of the specified complexity fallacy: How do we determine that the sorted deck of cards is in a particularly “ordered” state to begin with? Why is a deck sorted by suit and value any more or less “entropic” than a deck in any given “shuffled” state? Of course we can look at it and say “Oh, well, that looks rather ordered, doesn’t it?”, but intuitive as this may be it comes up a bit lacking in the rigour department. As is often pointed out elsewhere, the sequence “1,5,4,2,3” is just as unique an arrangement as “1,2,3,4,5”. (In a certain sense, a jumbled sequence contains more information than an orderly one because it cannot be expressed as concisely, mathematically speaking. Is this significant?)
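One way to pin down “order” without human judgement is description length, in the spirit of Kolmogorov complexity: an arrangement is orderly to the extent that a short rule reproduces it. Here is a crude sketch that uses a delta encoding plus a general-purpose compressor as a stand-in for that idea:

```python
import random
import zlib

def description_length(deck):
    """Rough proxy for information content: the compressed size of the
    deck's delta encoding (first card, then successive differences)."""
    deltas = bytes((b - a) % 52 for a, b in zip(deck, deck[1:]))
    return len(zlib.compress(bytes([deck[0]]) + deltas, 9))

sorted_deck = list(range(52))       # new-deck order: deltas are all 1
shuffled_deck = sorted_deck[:]
random.seed(1)
random.shuffle(shuffled_deck)       # deltas now look random

short = description_length(sorted_deck)
long_ = description_length(shuffled_deck)
```

The sorted deck collapses to a few bytes (“start at 0, add 1 fifty-one times”) while the shuffled deck barely compresses at all. It’s only a proxy, but it’s an objective one, and it captures the sense in which a jumbled sequence contains more information than an orderly one.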

Ultimately, my confusion comes down to this question: I get that entropy in the context of information involves a transition from more ordered to less ordered states, but how is this order defined? Clearly we cannot involve subjective human judgement, but the commonly used deck-of-cards example implicitly relies upon it. I would be a much happier reader if layman-accessible books on physics, cosmology, and the like were kind enough to explain both parts of this rather than just the first.

If some itinerant physicist chances upon this post and is able to comment, e.g. with an explanatory example or even with something to the effect of “Here’s the math; unfortunately there’s no easier way to explain it so take it or leave it”, I would be grateful (knowing there’s a difficult answer is much better than acting like there’s no question in need of one). Barring that, I hope at least that the act of writing this post has helped focus my thoughts on what to seek in further reading.

haggholm: (Default)

PSA: In spite of a recent press release from an organisation no less respectable than the WHO, and the consequent media furor from less respectable sources, there is in fact no good reason to think that cell phones are responsible for increases in cancer. A few facts:

  • The WHO has reported on the same research before. Last time, they interpreted it as meaning that cell phones don’t increase risk of cancer. This time, they seem to interpret it as meaning that they do. As far as anyone knows, nothing’s changed except their reporting.

  • There have been many studies on this, the general consensus of which is that there’s no link between cell phone use and the incidence of brain cancers. In fact, such a link isn’t even plausible on a basic-science level.

  • The “smoking gun” study has an enormous flaw: It’s a retrospective study based on asking brain cancer patients about their cell phone usage a decade ago. Would it be surprising, given their circumstances, and given that they know that the possibility of causation has been floated about, if they accidentally overestimated their usage?

  • Even after what looks to be jumping to conclusions, cell phone usage has been placed in the same rather tentative risk category of cancer causation as—pay attention—pickled vegetables and coffee.

More here (good and succinct), and here (comprehensive and not at all brief).

haggholm: (Default)

Via Carl Zimmer:

A single spoonful of seawater might harbor a billion viruses. Most of those viruses proved to be bacteriophages—in other words, they infect bacteria. That's not surprising, because the most abundant hosts in the oceans are microbes. But what is surprising is the effect that those marine phages have on life in the sea. Viruses kill half of all the bacteria in the oceans every day.

haggholm: (Default)

Nothing much to say, but an interesting link to share. A while back I wrote about phages, the viruses that infect bacteria (and got some good comments, I might add). Today, I happened upon this exchange between science writer Carl Zimmer (author of several great books) and Timothy Lu, an MIT researcher on phages. I suggest you go read it if this stuff interests you at all.

Fortunately, in the past few decades, there has been a renaissance brewing in the phage world. Commercial, government, and academic labs have begun to tackle the fundamental issues that have held back phage therapy using rigorous molecular tools. To use phages to effectively treat bacterial contaminations, these labs have been developing technologies for classifying bacterial populations, identifying the right combination of phages to use, and optimizing phage properties using evolutionary or engineering approaches.

haggholm: (Default)

These days scientists have a much clearer picture of our inner ecosystem. We know now that there are a hundred trillion microbes in a human body. You carry more microbes in you this moment than all the people who ever lived. Those microbes are growing all the time. So try to imagine for a moment producing an elephant’s worth of microbes. I know it’s difficult, but the fact is that actually in your lifetime you will produce five elephants of microbes. You are basically a microbe factory.

The microbes in your body at this moment outnumber your cells by ten to one. And they come in a huge diversity of species—somewhere in the thousands, although no one has a precise count yet. By some estimates there are twenty million microbial genes in your body: about a thousand times more than the 20,000 protein-coding genes in the human genome. So the Human Genome Project was, at best, a nice start. If we really want to understand all the genes in the human body, we have a long way to go.


In the September 2010 issue of the journal Microbiology and Molecular Biology Reviews, a team of researchers looked over this sort of research and issued a call to doctors to rethink how they treat their patients. One of the section titles sums up their manifesto: “War No More: Human Medicine in the Age of Ecology.” The authors urge doctors to think like ecologists, and to treat their patients like ecosystems.


Here’s one crude but effective example of what this kind of ecosystem engineering might look like. A couple years ago, Alexander Khoruts, a gastroenterologist at the University of Minnesota, found himself in a grim dilemma. He was treating a patient who had developed a runaway infection of Clostridium difficile in her gut. She was having diarrhea every 15 minutes and had lost sixty pounds, but Khoruts couldn’t stop the infection with antibiotics. So he performed a stool transplant, using a small sample from the woman’s husband. Just two days after the transplant, the woman had her first solid bowel movement in six months. She has been healthy ever since.

Carl Zimmer, The Human Lake

haggholm: (Default)

A while back, I wrote a post about 3D movies and what I dislike about them. Note that I focus on the negatives, which does not mean that I think there are no positives—but I’m certainly not generally impressed, mostly because while I think true 3D graphic displays are a wonderful idea, I don’t think the cinema is the place for it. I mentioned problems like the fact that the faux-3D movie makes me expect parallax effects, which do not exist; and that I find the illusion of different focal depths in the image both tiring and irritating.

These problems could be solved for interactive computer simulations—like games—certainly in theory, and if not today then surely in future practice: by tracking head movements (to solve the parallax problem: recall this awesome hack), and by tracking eye movements and pupil dilations and contractions to adapt image “fuzziness” and create a truly convincing illusion of focal depth. However, this would not easily lend itself to movies, because it requires adapting the image to the individual viewer; besides, no technique or technology I know of could even capture the necessary data, except for 3D-generated CG.

In a letter to Roger Ebert, veteran editor Walter Murch has described another problem relating to focal depth that I hadn’t thought about:

The biggest problem with 3D, though, is the “convergence/focus” issue. A couple of the other issues – darkness and “smallness” – are at least theoretically solvable. But the deeper problem is that the audience must focus their eyes at the plane of the screen – say it is 80 feet away. This is constant no matter what.

But their eyes must converge at perhaps 10 feet away, then 60 feet, then 120 feet, and so on, depending on what the illusion is. So 3D films require us to focus at one distance and converge at another. And 600 million years of evolution has never presented this problem before. All living things with eyes have always focussed and converged at the same point.

True enough, and as the article goes on to say, this may well account for a good deal of the fatigue and headaches that 3D moviegoers experience. I wonder how, or even whether, this could feasibly be solved by a single-user adaptive system. I suppose if the display used a technology like (very, very low-power!) lasers whose angle of striking the eye could be varied depending on focal depth…
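The size of the convergence/focus mismatch is easy to estimate with a little trigonometry. The interpupillary distance below is an assumed adult average; the viewing distances are Murch’s own examples:

```python
import math

IPD = 0.063    # assumed interpupillary distance, metres (~63 mm average)
FOOT = 0.3048  # metres per foot

def vergence_deg(distance_m):
    """Angle between the two eyes' lines of sight when they converge on
    a point at the given distance."""
    return math.degrees(2 * math.atan((IPD / 2) / distance_m))

focus = vergence_deg(80 * FOOT)   # the screen, where the eyes must focus
near = vergence_deg(10 * FOOT)    # where the illusion asks them to converge
far = vergence_deg(120 * FOOT)
```

Focus stays locked to the geometry of a ~0.15° vergence angle while the illusion demands convergence at anything from about 0.1° to 1.2°, shot after shot; it is not hard to believe that this is tiring.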

haggholm: (Default)

To some, science is all about models.

Exactly what is the purpose of science? It depends on whom you ask. Some might say that it aims to find the ultimate reality of things (to the best of our ability). Others might say that this is, in fact, a ludicrous idea. All that we can do is to construct the best models possible to predict and describe reality. I think this is a useful way to think of it.

You may have heard the joke about the physicist who was asked to help a dairy farmer optimise milk yield, worked on his calculations for a few weeks, and came back confident that he had found a solution: …First, let’s assume that we have a perfectly spherical cow in a vacuum…. Well, rather than poke too much fun at this fictional physicist, I think that (if his math was right) this is actually a very good model. The reason why I think so is that it’s immediately clear that, if the calculations based on the model are valid at all (and checking that is what empirical science is for), there’s a domain in which it is useful, but we are not tempted to extend the analogy beyond its proper domain. In a slightly more realistic example, we could calculate the features of a head-on collision between cars by considering them as simple lumps of material, using a model that we originally devised to figure out the collision of lumps of clay. The model is useful (in that we can calculate some features of inelastic collisions), but we are not tempted to extend the car–clay analogy beyond the domain in which the model is useful: We know that cars behave very unlike lumps of clay in many ways (just as we know that cows sometimes act decidedly un-spherically, e.g. in their mode of locomotion).
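The car-as-clay-lump model is just the textbook perfectly inelastic collision: momentum is conserved, kinetic energy is not. A sketch with made-up masses and speeds:

```python
def perfectly_inelastic(m1, v1, m2, v2):
    """Final shared velocity when two bodies stick together on impact;
    momentum is conserved, kinetic energy is not."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

# Two 1500 kg cars, head-on at 20 m/s each:
v_final = perfectly_inelastic(1500.0, 20.0, 1500.0, -20.0)  # dead stop
ke_before = 0.5 * 1500.0 * 20.0 ** 2 * 2
ke_after = 0.5 * 3000.0 * v_final ** 2
lost = ke_before - ke_after  # energy that goes into crumpling and heat
```

The model predicts the final velocity and the energy absorbed; it says nothing about how the cars crumple, which is fine, because we never meant it to.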

So far, so good. What these models have in common is that they use examples from common experience (we are all passingly familiar with the ideas of cows, cars, spheres, and lumps of clay) to explain other phenomena that take place more or less in the realm of common experience. However, things can get very different when we move to radically different scales. Consider, for instance, when we use models (analogies) to explore and explain large-scale features of space, or small-scale features of particles on the atomic or subatomic level.¹

Consider the atom.

You may know that the atom has a nucleus and a bunch of electrons. So, it looks like a lump of spheres (positively charged protons and electrically neutral neutrons) comprising the nucleus, orbited by a bunch of negatively charged electrons, equal in number to the protons. It all looks rather like a solar system with the electrons standing in for planets, orbiting their star, the nucleus. And this is a very good model that is not only evocative to us laymen, but has helped scientists figure out all sorts of things about how matter works. We can even elaborate the model to say that electrons spin about their axes, just like planets do; and they can inhabit different orbits—and change orbits—just like planets and satellites can.

Unfortunately, virtually nothing I said in the preceding paragraph is really true in any fundamental sense about what atoms are really like. It’s true that Niels Bohr’s model of the atom is rather like that, and that it is indeed helpful—but that’s not what the atom is like. It is here that the common sense familiarity of the model deceives us laymen into overextending the analogy to domains where it is no longer useful and valid. For instance, when I say that an electron can change its orbit, you may imagine something like a man-made satellite orbiting the Earth at 30,000 km, whose orbit decays gradually to 20,000 km. But an electron behaves nothing like that: It can only ever exist in certain orbits, and when an electron goes from one energy state to another, it goes there immediately. It is physically incapable of being in between. It’s as though our satellite suddenly teleported from its higher to its lower orbit, but even weirder because the satellite not only doesn’t, but in fact cannot ever inhabit an altitude of, say, 15,000 km.
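The discreteness is easy to see in the Bohr model’s standard energy formula for hydrogen, E_n = −13.6 eV / n²: levels exist only at integer n, with nothing in between.

```python
RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy, in eV

def bohr_level(n):
    """Energy of the nth Bohr orbit of hydrogen. Only whole-number
    orbits exist; there is no orbit 'between' n=1 and n=2."""
    if n < 1 or n != int(n):
        raise ValueError("only discrete orbits n = 1, 2, 3, ... exist")
    return -RYDBERG_EV / n ** 2

# An electron dropping from n=2 to n=1 emits a photon of exactly the
# energy difference (about 10.2 eV), never anything intermediate.
photon_ev = bohr_level(2) - bohr_level(1)
```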

Nor is it true that electrons and other subatomic particles spin in the sense that planets do. It’s certainly true that they have certain properties, called spin, and that if you use the same mathematics to work out their consequences as you would usually apply to simple spinning objects, you get good results. In this sense, a spinning sphere is a good model for a spinning subatomic particle. But I gather that at least some subatomic particles (the spin-½ particles, such as electrons) have the rather curious property that they have to go through two full revolutions to get back to where they started.

In fact, when I said that the atom looks like a lump of spheres, et cetera, even that isn’t fundamentally true. Everything you perceive with your sense of sight consists of objects with considerable spatial extent, and your sight is a complicated function of the fact that photons of varying energy levels bounce off them. You can’t do this with atomic nuclei, let alone electrons, because they are too small. Shoot a photon at an atom and you will change the atom. (The fact that quarks are said to have colour is pure whimsy. Consider the fact that one type of quark is “charm”, and only one of them is formally labelled “strange”.)

One of my favourite ideas—not an idea of my own, mind!—is due to John Gribbin, author of In Search of Schrödinger’s Cat: Quantum Physics and Reality. In mentioning peculiar things like the above, where electrons ‘orbit’ nuclei and electrons ‘spin’, he suggests that you may cleanse your mind of misleading connotations by reminding yourself that these properties are not really the ones you’re familiar with at all. Instead, he suggests (if I recall correctly) not that spinning electrons orbit nuclei, but rather that gyring electrons gimbal slithy toves—I’m sure I’m getting this slightly wrong, but I am very sure that I am using the right words from Lewis Carroll’s Jabberwocky.

`Twas brillig, and the slithy toves
  Did gyre and gimble in the wabe:
All mimsy were the borogoves,
  And the mome raths outgrabe.

The key observation to keep in mind is that, to paraphrase Richard Dawkins, humans have evolved to have an intuitive grasp of things of middling size moving at middling speed across the African savannah. They make sense to us on a visceral level. The large and the small—stars and galaxies and clusters, or atoms and electrons and quarks; the slow and the fast—evolution and geological change and stellar evolution, or photons and virtual particles and elemental force exchange: We did not evolve to comprehend them, because comprehension thereof had no adaptive value for Pliocene primates (well, and the data were not available to them). We can only describe and predict them using science and mathematical models, and we can only make them seem comprehensible by constructing models and analogies that relate them to things that we do seem to understand.

But every so often we encounter natural phenomena that just don’t seem to make sense. The quantum-mechanical particles that simultaneously travel multiple paths and probabilistically interfere with themselves appear to contradict all common sense. But perhaps this is only because we attempt to think of them as little balls moving in much the way that little balls do, only faster and on smaller scales. It may be mathematically sensible to say that an electron travels infinitely many paths at once, with differential probability—which is clearly contradictory nonsense. Right? But in reality, there’s no such thing as an electron simultaneously travelling multiple paths; it’s just outgrabing rather mimsily. And the only reason why we have to model it as though our middling-scale phenomenon of probability were at issue is that we don’t have the ability to appreciate the gyring and gimbling of the slithy toves.

Now, if you enjoyed reading that, you’ll enjoy Gribbin’s In Search of Schrödinger’s Cat even more. Go forth and read it.

¹ It is a fact, sometimes held up as remarkable, that on a logarithmic scale of size from elemental particles to the universe as a whole, we’re somewhere in the middle. Once or twice, I have even heard people mention this as though it were imbued with mystical significance, in a sort of muddled anthropic principle:

On the smallest scales, elemental particles are too simple to do anything very interesting; on the largest scales, the universe is just a highly dilute space with some fluffy lumps called galaxy clusters floating around. In the middle, where we are, is where the interesting stuff happens: Large enough to combine the elemental effects into highly intricate patterns, but not so large that the patterns average out.

Such thinking is pretty fluffy and dilute—of course (the weak anthropic principle) we are necessarily on a level where “interesting” stuff happens, as reflecting brains would not occur on levels where they cannot, but I also think that this is an artefact of thinking that could apply to any phenomenon logically intermediate between other levels, so long as it’s the intermediate level that the observer is interested in.

On the smallest scales, cellular respiration just deals with chemical reactions too elemental to be very interesting; on the largest scales, muscle fibres just aggregate in big lumps that do nothing more than produce boringly linear forces of contraction. In the middle, where the individual cells are, is where the interesting stuff of semipermeable membranes, ionic drives, mitosis, protein synthesis and folding, and all that happens: Large enough to combine the elemental chemical reactions into highly intricate patterns, but not so large that the patterns average out.

haggholm: (Default)

I think it’s pretty well established that the Sapir-Whorf hypothesis is not true—at least not in the strong version that states that thought is formed by language, as immortalised in Orwell’s Nineteen Eighty-Four, where Oceania’s totalitarian regime attempts to eradicate the very idea of liberty by removing any linguistic means of expressing it.

Modern research shows that this couldn’t possibly work. You don’t think in English, or Newspeak, or any other human language. Rather, you think in what cognitive scientist Steven Pinker calls Mentalese—that is, your brain has its own internal representations of concepts, presumably more amenable to storage in neural connections, synaptic strengths, or however it is that your brain actually does store things.

But this does not mean that there is nothing to it; the area remains controversial and actively researched, with some researchers arguing that real effects exist and others arguing that there are none. Famous examples include words for directions: English speakers most comfortably use relative terms like in front of, behind, left, right, and so forth, while there are languages—some South American and Australian ones, I believe—with no such words: Instead all directions are expressed as cardinal (“compass”) directions, so you would not be left of the house, but north of the house. And some research indicates that native speakers of such languages perform better than English speakers at tasks involving cardinal directions, but worse at tasks where relative arrangement is crucial.

And personally, I know that while there are many things that I don’t remember any context for whatsoever, there are some facts whose source and linguistic context I recall very precisely. For instance, I know that I first learned the verb imagine from The Legend of Zelda: A Link to the Past for the SNES, where the antagonist, at the final confrontation, declares: “I never imagined a boy like you could give me so much trouble. It’s unbelievable that you defeated my alter ego, Agahnim the Dark Wizard, twice!” (I admit that, significantly, I did not recall the exact phrasing.)

Different languages express things in very different ways. I could quickly rifle through Steven Pinker’s The Language Instinct to find some really interesting examples, but instead I’ll just recommend it to you as a wonderful book on mind and language and go directly to a more pertinent example. I am currently re-reading Simon Singh’s (excellent) The Code Book, which—in discussing the cryptography used during World War II—contains a brief section on the Navajo language. It has some features very alien to speakers of Germanic languages. For instance, nouns are classified by ‘genders’ very unlike, say, the Romance language masculine and feminine nouns, or the Swedish ‘n’ versus ‘t’ genders. Instead, you have families like “long” (things like pencils, arrows, sticks), “bendy” (snakes, ropes), “granular” (sand, salt, sugar), and so on. Conjugation can get pretty complex.

But Navajo is also one of the languages that contain a rule of conjugation that I find extremely interesting: If you make a statement in Navajo, it will be grammatically different depending on whether you describe something you saw for yourself or something you know by hearsay.

I find this very fascinating and also very, well, useful. I wish English had rules like this! In fact, I wish it had at least four: One for things I have experienced myself; one for things I have by hearsay; one for things which I believe because it is my impression that evidence overwhelmingly favours them; and one for things that I do not necessarily believe at all. I’ll think of them as “eyewitness”, “hearsay”, “reliable”, and “unreliable”.

Now, I believe it’s a fact that we all tend to suffer from some degree of source amnesia. That is, you go through life and absorb all kinds of factual statements (correct or incorrect). At the time when you hear a claim, you will hopefully evaluate its reliability based on its source—peer-reviewed science, expert opinion, intelligent layman, speculative, uninformed. (I have listed them in rough order of reliability.) However, as time goes by, we tend to remember putative facts but forget their sources. Thus, with time you risk ending up with a less reliable picture of a field of knowledge wherein you imbibed many different putative facts, as you start to forget which facts came from which sources and so which facts are more or less reliable.

And from all that, I can finally ask the question currently on my mind: Given that language may have some effect on one’s thinking, and given that word choice may stick with the memory of a putative fact, would imbibing putative facts in the context of a language wherein the source is grammatically encoded help us to retain the memory, if not of precise source, then at least of a form of ‘credibility rating’?

Sadly, of course, it’s a question I am completely unable to answer. I wonder if any experiments have been run. If not, I wonder if anyone could gather up some Navajo volunteers and find out…

haggholm: (Default)

Fun fact: You are almost certainly a mutant. Estimates vary, but according to most sources I’ve found, the average person carries 50–100 mutations. Most genetic variation in any population, though—including human populations—is due not to new mutation but to the recombination, via sexual reproduction, of existing genetic polymorphism: In layman’s terms, different alleles of any given gene (e.g. versions A and B for position P on chromosome C) already exist in the population, and novel phenotypes appear simply because there are so many possible combinations.

On an important side note, it’s good to know that when biologists talk about genetic similarity, it may mean different things depending on the context. For instance, you may have heard that we share (something like) 99% of our DNA with chimpanzees and bonobos. This is true in a pretty straightforward sense (but not entirely straightforward because the identical stretches don’t always line up). On the other hand, you may also have heard that you share exactly ½=50% of your [nuclear] genes with each of your parents, and on average ½=50% with each of your siblings (exactly ¼=25% with each of your grandparents; on average ¼=25% with each of your aunts and uncles; on average ⅛=12.5% with each of your cousins; et cetera). Clearly, you are rather more genetically similar to your grandmother than to a chimpanzee. The disconnect is because when we’re measuring kin relationships, we’re interested in the similarity in the polymorphic part of the genome. That is, an awful lot of genes are identical in all humans. Among the ones that might vary between any two individuals, you’ll inherit exactly half from each parent. (The similarity might be higher than 50%, because your parents might share identical alleles, but still, you received exactly 50% from each of them thanks to the magic of meiosis. —Additionally, of course, you received all of your mitochondrial genes from your mother.)
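Those kinship fractions fall out of simple path-counting: each parent-child link halves the expected shared fraction of the polymorphic genome, and independent paths of descent add up. A minimal sketch in Python (the function and the path counts are my own illustration, not anything from the text):

```python
# Expected fraction of the polymorphic genome shared between two
# relatives, computed by counting meiotic links along each path of
# descent connecting them. Each link halves the expectation; when
# several independent paths exist, their contributions add up.

def relatedness(paths):
    """paths: list of path lengths (number of parent-child links)
    connecting the two individuals."""
    return sum(0.5 ** links for links in paths)

# Parent: one path of 1 link.
print(relatedness([1]))      # 0.5
# Grandparent: one path of 2 links.
print(relatedness([2]))      # 0.25
# Full sibling: two paths of 2 links each (via mother, via father).
print(relatedness([2, 2]))   # 0.5
# First cousin: two paths of 4 links each (one via each shared grandparent).
print(relatedness([4, 4]))   # 0.125
```

These are exactly the ½, ¼, ⅛ figures above—expectations over the polymorphic part of the genome, not over the DNA as a whole.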

In high-school biology, you will have talked about Mendelian inheritance. This is the familiar topic of dominant and recessive genes. In more depth, Mendelian inheritance is concerned with the phenotypic effects of single-locus variation; that is, differences resulting from whether I have variation A or B of a particular gene. This is actually very rare. Most variation is not Mendelian. For instance, it’s not as though there’s a single locus on your genome where the “height” gene goes, so that if you have the “tall” gene you are tall but if you have the “short” gene you are short. Instead, there are lots and lots of different genes that affect your production of growth hormones, your body type and proportions, and so forth, and the genetic contribution to your height is the combined effect of all these many different genes. (Such traits are called polygenic. The “converse” of polygeny is pleiotropy, in which a single gene affects many phenotypic traits. This is also common. What, did you think genetics was simple?)

Mendelian traits are fun, though. Wikipedia has a list of some known Mendelian traits in humans.

Trait: Do I have it?
Ability to taste phenylthiocarbamide: No idea.
Albinism (recessive): No.
Blood type: Well, clearly I have a blood type. I don’t actually know mine.
Brachydactyly (shortness of fingers and toes): No.
Cleft chin (dominant): Fortunately not.
Cheek dimples (dominant): Nope.
Free (dominant) or attached (recessive) earlobes: Free (dominant).
Wet (dominant) or dry (recessive) earwax: Wet (dominant). Incidentally, dry earwax is fairly common in East Asians. Allegedly, this is correlated with less body odour.
Face freckles (dominant): No.
Hitchhiker’s thumb (recessive): Yes! I have a somewhat uncommon recessive Mendelian trait!
Sexdactyly (six fingers/toes): No, but it would be cool.
Sickle-cell trait (also considered co-dominant): Not that I am aware of.
Widow’s peak (dominant): Alas yes, very pronounced. I will comfort myself with the fact that Wikipedia also claims that “In stories this trait is associated with a villain, such as in the case of Count Dracula.”

Interestingly, some (literal) textbook examples of Mendelian inheritance are not strictly Mendelian—Wikipedia has a list of Traits previously believed to be Mendelian, which includes eye colour, hair colour, Morton’s Toe (I can’t say I ever had a position on this one), and even that most archetypal of examples, the ability to roll your tongue. (Of course, even if polymorphism for this trait exists in more than one locus, it may still be that polymorphism at a single locus is the most common source of variation.)

haggholm: (Default)

If the title of this blog post seems awfully miscellaneous to you, let that be a warning: The post will be likewise.

Recently, I have read a lot of books on evolution and biology that talk about the role of parasites (and bacteria and other agents of disease) in evolution, most importantly two books: Carl Zimmer’s Parasite Rex (an excellent book, read this!) and Matt Ridley’s The Red Queen (also a good book, although I found it less riveting than Zimmer’s). Both of these books present or allude to the idea that parasites and infection have been immensely powerful driving forces behind evolution. In fact, this is a leading explanation for why sex exists (to mix up lock-and-key molecules in our immune system and confuse the efforts of parasites to evolve counters to our immune defences).

Personally, I accept this intellectually but find it hard to really take to heart; having grown up with Sir David Attenborough, I tend to think of selection pressures as, say, cheetahs hunting down Thomson’s gazelles—where it is very clear and obvious what the pressures and consequences are for each participant. When contemplating ungulate evolution, a cheetah or a wolf looks like a much greater pressure than a cestode or a tick.

And of course books that go into detail on parasites will emphasise those points—Carl Zimmer’s book is called Parasite Rex, for crying out loud; of course he’ll focus on that, and even if he’s completely correct and honest I may come away with a bias toward the things he talks about because, well, that’s what my recent reading has been all about. So is the pressure so vast, or is my perspective biased by my selection of literature?

Well, the other day, having attended a talk at UBC, I happened to be sitting next to PZ Myers at the pub. This seemed like a golden opportunity to put the question to a biologist who has read and studied an awful lot about evolution—a biologist, furthermore, who as far as I know does not have a horse in the parasite race. The answer was enlightening and interesting. PZ told me that when you look at the human genome, you can estimate which genes have been subject to the most natural selection. As I understand it, you basically measure the ratio of synonymous to non-synonymous changes.

An aside, as I understand it (PZ did not go into this in depth, but I believe it is fairly simple; correct me if I am wrong!). A synonymous change is a mutation that leaves the encoded protein unchanged: Because the genetic code is redundant, several codons specify the same amino acid, so swapping one such codon for another makes no functional difference. (A non-synonymous change is, obviously, one that does alter the protein.) If a gene is constrained by natural selection, it will tend to keep its function, while a gene with no selection pressures acting on it is free to vary randomly and will pick up non-synonymous mutations. In fact, scientists use junk DNA—DNA that is never translated to protein—as a sort of genetic clock: Because the DNA doesn’t do anything, it will drift over time, picking up M mutations every N years, so you can measure how long ago two lineages split by checking how far their junk DNA has diverged.
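To make the aside concrete, here is a toy sketch that compares two aligned coding sequences codon by codon and classifies each difference by whether it changes the encoded amino acid. This is my own illustration: the codon table is deliberately tiny (only the codons used in the example), and real analyses also normalise by the number of available synonymous and non-synonymous sites, which this skips.

```python
# Classify substitutions between two aligned coding sequences as
# synonymous (amino acid unchanged) or non-synonymous (changed).

# Deliberately tiny codon table: only the codons used in the example.
CODON_TABLE = {
    "TTT": "Phe", "TTC": "Phe",  # both encode phenylalanine
    "GAT": "Asp", "GAA": "Glu",  # aspartate vs glutamate
}

def classify_substitutions(seq_a, seq_b):
    syn = nonsyn = 0
    for i in range(0, len(seq_a), 3):
        codon_a, codon_b = seq_a[i:i+3], seq_b[i:i+3]
        if codon_a == codon_b:
            continue  # no substitution in this codon
        if CODON_TABLE[codon_a] == CODON_TABLE[codon_b]:
            syn += 1      # same amino acid: synonymous
        else:
            nonsyn += 1   # different amino acid: non-synonymous
    return syn, nonsyn

# TTT -> TTC leaves Phe unchanged; GAT -> GAA turns Asp into Glu.
print(classify_substitutions("TTTGAT", "TTCGAA"))  # (1, 1)
```

A gene held in place by selection will show mostly synonymous differences between lineages; a gene free to drift accumulates non-synonymous ones as well.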

Back to what PZ said, then. He told me that when you look at the human genome, there are two areas where almost all the changes have happened over the past several million years: The immune system, and genes for sperm recognition (by which the ovum knows to admit sperm and reject foreign matter like bacteria and fungal spores). Every other change is, by comparison, rare and incidental. This means two things: First, the notion that parasites are primary sources of selection pressure is borne out. Second…sperm recognition; what’s that?

Sperm recognition is a feature I had never heard about, but an extremely important one (as shown by the huge selection pressures). Women, of course, have various problems, one being that their reproductive tract offers a helpful avenue for pathogens to enter their bodies. Because ova are limited in number, they need to be protected. Therefore, it is extremely important that when they fuse with anything, it is a genuine fertilisation. It would be extremely maladaptive if ova fused with every bacterium or fungal cell that made its way to the Fallopian tubes before the immune system could get to them. Therefore, ova are equipped with chemical receptors that recognise various molecules on the coatings of sperm cells and allow only cells with valid ‘key’ molecules inside.

This, PZ said as an aside, is one common cause of difficulty conceiving. There’s a fair amount of polymorphism in the population: Two men’s sperm cells don’t carry the exact same complement of molecular keys to engage the chemical locks on the ova; two women’s ova don’t have the exact same chemical locks. In some rare cases, two perfectly fertile individuals of opposite sex may be unfortunate enough that the man’s sperm carry a set of keys, and the woman’s ova a set of locks, but none of the keys fit the locks—even though each of them may be perfectly able to have children with most people, they cannot have children together.

This is what I learned over a beer with PZ Myers. Incidentally, I gave him an official Pope card. I got it a long time ago from [livejournal.com profile] wildmage and I was sad to part with it, but who better to hold papacy?

haggholm: (Default)

There are few things so sure to annoy me as hype. Among those few things, of course, is factual inaccuracy. For both of these reasons, the new phenomenon¹ of 3D movies annoys me.

I will concede that in a narrow, technical sense, these movies are indeed 3D in that they do encode three spatial dimensions—that is, there is some information about depth encoded and presented. However, I don’t think it’s all that good, for various reasons, and would be more inclined to call it, say, about 2.4D.

Our eyes and brains use various cues for depth perception. The obvious ones that leap out at me, if you’ll excuse the pun, are

  1. Stereoscopic vision
  2. Focal depth
  3. Parallax

Let’s go over them with an eye (…) toward what movie makers, and other media producers, do, could do, and cannot do about it.

1. Stereoscopic vision

Odds are very good that you, gentle reader, have two eyes. Because these eyes are not in precisely the same location, they view things at slightly different angles. For objects that are far away, the difference in angle is very small. (Astronomers deal mostly with things at optical infinity, i.e. so far away that the lines of sight are effectively parallel.) For things that are very close, such as your nose, the difference in angle is very great. This is called stereoscopic vision and is heavily exploited by your brain, especially for short-distance depth perception, where your depth perception is both most important and most accurate: Consider that you can stick your hand out just far enough to catch a ball thrown to you, while you surely couldn’t estimate the distance to a ball fifty metres distant to within the few centimetres of precision you need.
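The angles involved are easy to put in numbers. Assuming a typical interpupillary distance of about 6.5 cm (my figure, not anything from this post), the vergence angle between the two lines of sight for an object straight ahead is 2·atan((ipd/2)/distance):

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.065):
    """Angle (degrees) between the two lines of sight for an object
    straight ahead at the given distance. 0.065 m is a typical
    interpupillary distance (my assumption)."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

for d in (0.25, 1.0, 10.0, 50.0):
    print(f"{d:5.2f} m -> {vergence_angle_deg(d):.3f} degrees")
```

At 25 cm the angle is nearly fifteen degrees; at 50 m it has shrunk to under a tenth of a degree, which is exactly why stereoscopic depth perception is only precise close up.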

“3D” movies, of course, exploit this technique. In fact, I think of these movies not as three-dimensional, but rather as stereoscopic movies. There are two or three ways of making them, and three ways I’m aware of to present them.

To create stereoscopic footage, you can…

  1. …Render computer-generated footage from two angles. If you’re making a computer-generated movie, this would be pretty straightforward.

  2. …Shoot the movie with a special stereoscopic camera with two lenses mimicking the viewer’s eyes, accurately capturing everything from two angles just as the eyes would. These cameras do exist and it is done, but apparently it’s tricky (and the cameras are very expensive). Consider that it’s not as simple as just sticking two cameras together. Their focal depth has to be closely co-ordinated, and for all I know the angle might be subtly adjusted at close focal depths. I believe your eyes do this.

  3. …Shoot the movie in the usual fashion and add depth information in post-processing. This is a terrible idea and is, of course, widely used. What this means is that after all the footage is ready, the editors sit down and decide how far away all the objects on screen are. There’s no way in hell they can get everything right, and of course doing their very very best would take ridiculous amounts of time, so basically they divide a scene into different planes of, say, “objects close up”, “objects 5 metres off“, “objects 10 metres off”, and “background objects”. This is extremely artificial.

All right, so you have your movie with stereoscopic information captured. Now you need to display it to your viewers. There are several ways to do this with various levels of quality and cost effectiveness, as well as different limitations on the number of viewers.

  1. Glasses with different screens for the two eyes. For all I know this may be the oldest method; simply have the viewer or player put on a pair of glasses where each “lens” is really a small LCD monitor, each displaying the proper image for the proper eye. Technically this is pretty good, as the image quality will be as good as you can make a tiny tiny monitor, but everyone has to wear a pair of bulky and very expensive glasses. I’ve seen these for 3D gaming, but obviously it won’t work in movie theatres.

  2. Shutter glasses. Instead of having two screens showing different pictures, have one screen showing different pictures…alternating very quickly. The typical computer monitor has a refresh rate of 60 Hz, meaning that the image changes 60 times every second. Shutter glasses are generally made to work with 120 Hz monitors. The monitor will show a frame of angle A, then a frame of angle B, then A, and so on, so that each angle gets 60 frames per second. The way this works to give you stereoscopic vision is that you wear a pair of special glasses, shutter glasses, which are synchronised with the monitor and successively block out every alternate frame, so that your left eye only sees the A angle and your right eye only sees the B angle. Because the change is so rapid, you do not perceive any flicker. (Consider that movies look smooth, and they only run at 24 frames per second.)

    There’s even a neat trick now in use to support multiplayer games on a single screen. This rapid flicking back and forth could also be used to show completely different scenes, so that two people looking at the same screen would see different images—an alternative to the split-screen games of yore. Of course, if you want this stereoscopic, you need a 240 Hz TV (I don’t know if they exist). And that’s for two players: 60 Hz times number of players, times two if you want stereoscopic vision…

    In any case, this is another neat trick but again requires expensive glasses and display media capable of very rapid changes. OK for computer games if you can persuade gamers to buy 120 Hz displays, not so good for the movie theatre.

  3. The final trick is similar to the previous one: Show two images with one screen. Here, however, we do it at the same time. We still need a way to get different images to different eyes, so we need to block out angle A from the right eye, &c. Here we have the familiar red/green “3D” glasses, where all the depth information is conveyed in colours that are filtered out, differently for each eye. Modern stereoscopic displays do something similar but, rather than using a colour-based filter, display the left and right images with different polarisation and use polarised glasses for filtering. This reduces light intensity but does not entirely filter out a specific part of the spectrum from each eye.†
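The refresh-rate arithmetic behind the shutter-glasses approach (method 2 above) is worth spelling out: the screen’s refresh rate is divided among the viewers, and halved again for stereo. A trivial sketch (the function is my own illustration):

```python
def required_refresh_hz(viewers, fps_per_eye=60, stereo=True):
    """Display refresh rate needed to time-slice one screen among
    several viewers, showing each eye fps_per_eye frames per second."""
    eyes = 2 if stereo else 1
    return viewers * eyes * fps_per_eye

print(required_refresh_hz(1))                # one viewer, stereo: 120
print(required_refresh_hz(2, stereo=False))  # two players, flat: 120
print(required_refresh_hz(2))                # two players, stereo: 240
```

Hence the 120 Hz monitors sold for stereoscopic gaming, and the hypothetical 240 Hz set needed for two-player stereo.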

To summarise, there are at least three ways to capture stereoscopic footage and at least three ways to display it. Hollywood alternates between a good and a very bad way of capturing it, and uses the worst (but cheapest) method to display it in theatres.

2. Focal depth

All right, lots of talk but all we’ve discussed is stereoscopic distance. There are other tricks your brain uses to infer distance. One of them is the fact that your eyes can only focus on one distance at a time. If you focus on something a certain distance away, everything at different distances will look blurry. The greater the difference, the blurrier.

In a sense, of course, this is built into the medium. Every movie ever shot with a camera encodes this information, as does every picture shot with a real camera—because cameras have focal depth limitations, too.

The one medium missing this entirely is computer games. In a video game of any sort, the computer cannot render out-of-focus things as blurry because, well, the computer doesn’t know what you are currently focussing on. It would be very annoying to play a first-person shooter and be unable to make out the enemy in front of you because the computer assumes you’re looking at a distant object, or vice versa. Thus, everything is rendered sharply. This is a necessary evil, but an evil nonetheless, because it makes 3D computer graphics look very artificial: Everything is sharp in a way it would not be in real life. (The exception is in games with overhead views, like most strategy games: Since everything you see is about equally distant from the camera, it should be equally sharp.)

Personally, however, I have found this effect to be a nuisance in the new “3D” movies. With the stereoscopic dimension added, I watch the film less as a flat picture and more as though it truly did contain 3D information. However, when (say) watching Avatar, looking at a background object does not bring it into focus, even though stereoscopic vision informs me that it truly is farther away.

This may be something one simply has to get used to. After all, the same thing is in effect in regular movies, in still photography, and so on.

Still, if I were to dream, I should want a system capable of taking this effect into account. There already exist computers that perform eye-tracking to control cursors and similar. I do not know whether they are fast enough to track eye motion so precisely that out-of-focus blurring would become helpful and authentic rather than a nuisance, but if they aren’t, they surely will be eventually. Build such sensors into shutter glasses and you’re onto something.

Of course, this would be absolutely impossible to implement for anything but computer-generated media. A movie camera has a focal-distance setting just like your eye, stereoscopic or not. Furthermore, even if you made a 3D movie with computer graphics, showing it with adaptive focus would require simultaneously tracking and adapting to every viewer’s eye movements—like a computer game you can’t control, rather than a single visual stream that everyone perceives.

3. Parallax

Parallax refers to the visual effect of nearby objects seeming to move faster than far-away ones. Think of sitting in a car, watching the light poles zoom by impossibly fast, while the trees at the side of the road move slowly, the mountains only over the course of hours, and the moon and stars seem to be entirely fixed. Parallax: Because nearby objects are close to you, your angle to them in relation to the background changes more rapidly.
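That intuition can be quantified: for an object at perpendicular distance d as you move past it at speed v, the angular speed at closest approach is v/d radians per second, so halving the distance doubles how fast the object sweeps across your view. A sketch with made-up but plausible figures (roughly 28 m/s, about 100 km/h):

```python
import math

def angular_rate_deg_per_s(speed_m_s, distance_m):
    """Angular speed (degrees/second) of an object at its closest
    perpendicular distance as the observer moves past it."""
    return math.degrees(speed_m_s / distance_m)

# From a car at roughly 28 m/s (about 100 km/h); the distances are
# my own illustrative guesses:
for name, d in [("light pole", 5), ("roadside tree", 50), ("mountain", 5000)]:
    print(f"{name}: {angular_rate_deg_per_s(28, d):.2f} deg/s")
```

The pole whips past at hundreds of degrees per second while the mountain barely moves at all—exactly the effect described above.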

Of course, in a trivial sense, every animated medium already does capture this; again, it’s not something we need stereoscopic vision for. However, at close distances, a significant source of parallax is your head movement. A movie can provide a 3D illusion without taking this into account…so long as you sit perfectly still, never moving your head while a close-up is on the screen.

As with focal depths, of course, this is viewer-dependent and completely impossible to implement in a movie theatre. However, it should be eminently feasible on home computers and game systems; indeed, someone has implemented headtracking with a Wii remote—a far more impressive emulation of true three-dimensionality than any amount of stereoscopic vision, if you ask me.

Combined with eye tracking to monitor focal depth, this would be amazing. Add stereoscopic images and you’d have a perfect trifecta—I honestly think that would be the least important part, but also the easiest (the technology is already commercially available and widespread), so it would be sort of silly not to add it.


After watching a “3D” movie or two, I have come away annoyed because I felt that the stereoscopic effect detracted rather than added. Some of this is doubtless because, being who I am, the hyped-up claim that it truly shows three dimensions properly² annoys me. Some of it, however, is a sort of uncanny valley effect: Since stereoscopic vision tantalises my brain into attempting to regard these movies as three-dimensional, it’s a big turn-off to find that there are several depth-perception effects that they don’t mimic at all. If a movie is not stereoscopic, my brain does not seem to go looking for those cues, because there’s no hint at all that they will be present.

Of course, it may just be that I need to get used to it. After all, “2D” movies³ already contain depth cues ([limited] parallax, [fixed] focal-depth differences) without triggering any tendency to go looking for more. I haven’t watched a lot of stereoscopic imagery, and perhaps my brain will eventually learn to treat it as images-with-another-feature. For now, however, adding stereoscopic information to productions that can’t actually provide the full 3D visual experience seems to me rather like serving up cupcakes with plastic icing: It may technically be closer to a real cupcake than no icing at all, but I prefer a real muffin to a fake cupcake.

¹ It’s at least new in that only now are they widely shot and distributed.

² Technically all movies do depict three dimensions properly, but these new ones are really looking to add the fourth dimension of depth to the already-working height, width, and time.

³ Which are really 3D; see above.

† This should not have needed to be pointed out to me, as I have worn the damned polarised things, but originally I completely forgot them and wrote this as though we still relied on red/green glasses. Thanks to [livejournal.com profile] chutzman for the correction.

haggholm: (Default)

As I was re-reading Jerry Coyne’s Why Evolution Is True the other night, an aside about bacterial resistance to antibiotics set me to thinking. It’s well known, and to be expected from evolutionary theory, that bacteria—and other pathogenic organisms, such as viruses and eukaryotic parasites—tend to evolve resistance to the drugs designed to wipe them out. It’s fairly obvious: Those drugs exert an extreme selection pressure, as strong as we can make it…thus if the variation exists to produce resistance, we’re positively selecting for it and are guaranteed to end up with it in the end.

As Coyne also pointed out, the response is pretty variable: Diseases like polio and measles haven’t seemed to evolve any resistance at all, while common flu strains evolve so quickly that every flu season is a gamble, where the vaccines available are but a best guess as to what will crop up next. HIV takes the prize as the grand champion of rapid evolution: It mutates so quickly within each carrier that it eventually evolves its way around the immune system.

This of course reminded me of Orgel’s Second Rule: Evolution is cleverer than you are. As long as medical scientists invent drugs to eradicate pathogenic organisms, they will be exerting selective pressures and will tend to select for resistance. I don’t see that there’s a feasible way around this—short of a drug so astonishingly good that it is always 100% effective (even in the face of imperfect patient compliance!), or a set of rotating drugs so large that no strain can evolve resistance to all at once, it seems impossible; and those conditions seem extremely implausible.

So I wonder, how much medicine is being done that is not only aware of evolution (and the dangers it causes in adaptation to resist drugs), but actually uses it? What follows is very speculative—I’m no biologist or medical expert; I’m just thinking out loud, wondering how much of this is done or has been done; and if not, then why.

One example of a strategy that harnesses evolution would be to design phages. A phage (or bacteriophage), if you don’t know, is a virus that infects a bacterium. This may sound like a rare and exotic sort of being, but phages may well be one of the largest groups of organisms on Earth. Phage therapy makes a sort of intuitive and beautiful sense: Instead of giving a patient a drug that may operate on the basis of differential toxicity (it will damage the bacteria a lot more than the patient), give them something that is absolutely specific to the infectious agent. (In fact, if Wikipedia is to be believed on this, phage therapy is so specific that this is actually one of the problems: Each phage targets a very specific strain of bacteria, so if the infectious strain is different…)

But it seems to me that there’s room for so much more. Now that we stand at the early dawn of the age of genetic engineering, perhaps a time will come when phages can be designed to target pathogens for which no useful phages are known. And if the phages we have access to are less than ideal, then why should we not selectively breed them? If we have a disease for which all of Koch’s postulates hold, for instance, and we can retrieve the real, infectious strains and grow them in culture, then surely we can use those cultures as growth media for phages—evolving phages better able to attack those strains. By wiping out the bacterial cultures and growing subsequent generations of phages on bacteria that lack exposure, the bacteria will have no opportunity to evolve resistance, and we should be able to essentially breed phages to wipe out whatever bacteria we like.

Of course, eventually resistant strains would spread throughout the population. But so what? It’s no big deal. We’d just isolate those strains and selectively breed our phages to target them.

I don’t know whether anything similar could work against viruses. Viruses don’t metabolise, so virophages are pretty damned rare (I’ve only ever heard of one). Are there bacteria that eat viruses? If so, those bacteria could presumably be selectively “bred”…

Another, more modest, evolution-aware strategy would be to deliberately select against virulence. Consider, for example, a hypothetical drug that targets some, but only some strains of flu. Specifically, it suppresses particularly dangerous strains, but is deliberately designed not to target the less harmful ones. Such a strategy would not wipe out flu, of course—but then, we couldn’t do that in the first place. What it might do is give more “benign” strains of flu a competitive advantage over their deadlier cousins. Since all strains of flu work in similar ways, they could be viewed as competitors for the same ecological niche. (I could be wrong about this; I’m speculating, after all!) If we allow a relatively benign strain to fill this niche, but do our best to keep the deadlier version down, then we basically co-operate with the more benign flu at the expense of the more dangerous one. There will be a selective benefit in being benign.

(The inspiration for this speculation is actually evolution in HIV. HIV tends to spread pretty slowly—its epidemic proportions may not make it seem that way, but you really can’t spread HIV as quickly as you can spread a cold. A sneeze just hits more people than—Let’s just say that airborne diseases can spread more quickly. As a result, a strain of HIV that caused the carrier to drop dead within the week would never get off the ground; it would kill more quickly than it spread, which limits a disease to a small group of carriers. In other words, a disease has to kill slowly enough to infect a bunch of other carriers first, and how slowly is slow enough depends on how fast the disease spreads: In case of HIV, pretty slowly. I have read, though I cannot now find a source for this claim, that some widespread strains of HIV are evolving toward slower development to full-blown AIDS, for these reasons.)

I harbour no illusion that any of this is revolutionary to anyone at all professionally competent in biology or medical science—I am at best a reasonably well-read layman. But the notion of evolutionary strategies in medicine is intriguing to me, and I really do wonder how much is being done, how much has been done, how much can and will be done with it. For now, I merely jot down these thoughts and speculations. When I have a little more time, I should do some digging and some reading to see what’s out there.

haggholm: (Default)

Today, I spent a moment in awe of palaeontology.

As a child, I grew up thinking of dinosaurs as all being scaly animals, like lizards. Only in the 1990s were non-avian dinosaur fossils found with feathers preserved…now we know that many of them had feathers. And now I am starting to come across mentions (in the blogosphere reporting on scientific findings) of analyses of pigment molecules in feathers preserved in fossil dinosaurs (see e.g. here).

Think of that for a moment. The most recent non-avian dinosaur lineages died out about 65 million years ago. 65,000,000 years!¹ And not only do we know of many of the animals that lived back then, what they looked like and what they ate, but in fact we now begin to have a good solid idea of what colour they were. (In fact, it’s quantitatively more amazing still: Most of the dinosaurs with preserved pigments found so far seem, from a quick search, to be about twice that old. Examples: Psittacosaurus, Anchiornis, Sinosauropteryx.)

If this does not amaze you, you haven’t thought about it hard enough. Me, I shall spend a few more moments in awe, contemplating the piece of fossilised oviraptor egg shell I bought at the gift shop of the Swedish Museum of Natural History. (We have so many astonishing things that they can afford to sell something as astonishing as a 135-million-year-old piece of egg shell in the gift shop! That alone astonishes me.)

¹ Finding analogies for numbers that large is a common pastime, it seems. Here’s one I haven’t seen anyone use before: Human hair grows about 15 cm/year, so in 65 Myr, your hair would grow about 10,000 km. For you Americans: That’s more than twice the width of your country. —OK, that’s still too mindbogglingly large a number, 10,000. Well then: 65 Myr is enough time for the average person’s fingernails to grow 1,500 km. Or 300 km’s worth of toenails (they grow five times more slowly), which would take almost three hours to drive past at highway speeds. The time it would take your toenails to grow that long, if they weren’t ever trimmed or abraded, is the time that’s passed since the dinosaurs died out…
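For what it’s worth, the arithmetic behind these analogies is easy to check. The growth rates below are the rough figures the footnote implies, not precise biology:

```python
# Back-of-the-envelope check of the growth analogies in the footnote.
# Growth rates are rough figures, not precise biology.
MYR_65 = 65_000_000  # years since the last non-avian dinosaurs died out

def km_over_myr(rate_cm_per_year, years=MYR_65):
    """Total growth in kilometres at a given rate over `years` years."""
    return rate_cm_per_year * years / 100 / 1000  # cm -> m -> km

print(round(km_over_myr(15)))       # hair, ~15 cm/year: ~9750 km
print(round(km_over_myr(2.3)))      # fingernails, ~2.3 cm/year: ~1495 km
print(round(km_over_myr(2.3 / 5)))  # toenails, 5x slower: ~299 km
```

At highway speeds of roughly 100 km/h, those ~300 km of toenail do indeed take about three hours to drive past.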

haggholm: (Default)

I recently came across a wonderful essay by Isaac Asimov called The Relativity of Wrong. You can, and should, go read it, e.g. here. I will quote what I feel are key parts to understand his arguments, but you really, really ought to go read the whole thing. Really.

I RECEIVED a letter the other day. It was handwritten in crabbed penmanship so that it was very difficult to read. Nevertheless, I tried to make it out just in case it might prove to be important. In the first sentence, the writer told me he was majoring in English literature, but felt he needed to teach me science. …

The young specialist in English Lit, having quoted me, went on to lecture me severely on the fact that in every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern "knowledge" is that it is wrong. The young man then quoted with approval what Socrates had said on learning that the Delphic oracle had proclaimed him the wisest man in Greece. "If I am the wisest man," said Socrates, "it is because I alone know that I know nothing." The implication was that I was very foolish because I was under the impression I knew a great deal.

My answer to him was, "John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together."

The basic trouble, you see, is that people think that "right" and "wrong" are absolute; that everything that isn't perfectly and completely right is totally and equally wrong.

In the early days of civilization, the general feeling was that the earth was flat. This was not because people were stupid, or because they were intent on believing silly things. They felt it was flat on the basis of sound evidence. […] Another way of looking at it is to ask what is the "curvature" of the earth's surface. Over a considerable length, how much does the surface deviate (on the average) from perfect flatness? The flat-earth theory would make it seem that the surface doesn't deviate from flatness at all, that its curvature is 0 to the mile.

Nowadays, of course, we are taught that the flat-earth theory is wrong; that it is all wrong, terribly wrong, absolutely. But it isn't. The curvature of the earth is nearly 0 per mile, so that although the flat-earth theory is wrong, it happens to be nearly right. That's why the theory lasted so long.

…The Greek philosopher Eratosthenes noted that the sun cast a shadow of different lengths at different latitudes (all the shadows would be the same length if the earth's surface were flat). From the difference in shadow length, he calculated the size of the earthly sphere and it turned out to be 25,000 miles in circumference.

The curvature of such a sphere is about 0.000126 per mile, a quantity very close to 0 per mile, as you can see, and one not easily measured by the techniques at the disposal of the ancients. The tiny difference between 0 and 0.000126 accounts for the fact that it took so long to pass from the flat earth to the spherical earth.

…The earth has an equatorial bulge, in other words. It is flattened at the poles. It is an "oblate spheroid" rather than a sphere. This means that the various diameters of the earth differ in length. The longest diameters are any of those that stretch from one point on the equator to an opposite point on the equator. This "equatorial diameter" is 12,755 kilometers (7,927 miles). The shortest diameter is from the North Pole to the South Pole and this "polar diameter" is 12,711 kilometers (7,900 miles).

The difference between the longest and shortest diameters is 44 kilometers (27 miles), and that means that the "oblateness" of the earth (its departure from true sphericity) is 44/12755, or 0.0034. This amounts to 1/3 of 1 percent.

To put it another way, on a flat surface, curvature is 0 per mile everywhere. On the earth's spherical surface, curvature is 0.000126 per mile everywhere (or 8 inches per mile). On the earth's oblate spheroidal surface, the curvature varies from 7.973 inches to the mile to 8.027 inches to the mile.

The correction in going from spherical to oblate spheroidal is much smaller than going from flat to spherical. Therefore, although the notion of the earth as a sphere is wrong, strictly speaking, it is not as wrong as the notion of the earth as flat.

Naturally, the theories we now have might be considered wrong in the simplistic sense of my English Lit correspondent, but in a much truer and subtler sense, they need only be considered incomplete.
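Asimov’s curvature figures, incidentally, are easy to verify. A quick sketch, using the small-angle approximation that a sphere of radius R falls away from its tangent plane by about d²/(2R) over a distance d:

```python
# Quick check of Asimov's curvature figures: over a short distance d,
# a sphere of radius R drops below the tangent plane by about d^2 / (2R).
from math import pi

circumference_miles = 25_000          # Eratosthenes' figure, as quoted
R = circumference_miles / (2 * pi)    # ~3979 miles

drop_miles_per_mile = 1 / (2 * R)     # deviation over one mile, in miles
drop_inches_per_mile = drop_miles_per_mile * 63_360  # 63,360 inches per mile

print(round(drop_miles_per_mile, 6))  # 0.000126 -- matches Asimov
print(round(drop_inches_per_mile, 1)) # ~8.0 inches to the mile
```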

haggholm: (Default)

An article I recently read boldly claims that The G-spot 'doesn't appear to exist', say researchers. I read this with a sigh, as I know from experience how greatly distorted any research findings can get when they are published in mainstream media. Clearly, this was an instance of such distortion. I was curious to see what the actual study said, and went off to find it. You may read it here, if you are curious.

Sadly, it wasn’t very distorted after all.

In fact, the press release from the Department of Twin Research & Genetic Epidemiology was worse than the articles I had read. It presents the following conclusion from their study:

The complete absence of genetic contribution to the G-Spot, an allegedly highly sensitive area in the anterior wall of the vagina which when stimulated produces powerful orgasm, casts serious doubt on its existence, suggests a study by the Department of Twin Research to be published in the Journal of Sexual Medicine.

The investigators carried out this study by recruiting 1804 female volunteers from the TwinsUK registry aged 23-83 years. All completed questionnaires detailing their general sexual behavior and functioning, and a specific question on self-perception of the G- Spot. The researchers found no evidence for a genetic basis. This led to the conclusion that – given that all anatomical and physiological traits studied so far have been shown to be at least partially influenced by genes – the G-Spot does not exist and is more a fiction created by other factors e.g. an individual’s own sexual and relationship satisfaction or self-report is an inadequate way to assess the G-Spot and researchers should in future focus more on ultrasound studies.

The impression I took from the mainstream press articles, and which was reinforced by the institute’s press release, was that the existence of the G-spot was inferred to correspond to study participants’ reports of whether they had one. If this were so—if we could determine anatomy by poll—I expect I could find some people with more spleens than kidneys and more livers than lymph nodes.

I took the trouble to read the actual paper (it’s fairly short and quite accessible). The reality turns out not to be quite so bad. The main point—well, let me make an aside here and say that I find it extremely odd that what seemed to be the main point emphasised in the paper was considerably de-emphasised in the press release and consequent mainstream articles, seriously reducing their credibility. Anyway, back to the point:

The main point of the paper is that if the G-spot exists, it is an anatomical structure; if it is an anatomical structure, it is presumably genetically inherited. Even if some women have it while some don’t, we expect to find a strong correlation in twins. Since dizygotic (‘fraternal’) twins share on average 50% of their genome, and monozygotic (‘identical’) twins share 100% of their genome, if it’s genetically heritable at all, we should see a correlation in twins, especially monozygotic ones: If one twin has it, the other should (more often than is the case with unrelated people); if one twin does not, the other shouldn’t. Because twins are typically raised in extremely similar environments, even environmental factors should be similar. In particular, monozygotic twins should be more similar to each other than dizygotic twins for heritable (but not environmental) factors.
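This logic can be sketched with the classic Falconer estimate of heritability from twin correlations—a simplification of the variance-components models such studies actually fit, and the correlation numbers below are invented purely for illustration:

```python
# Sketch of the classic Falconer estimate of heritability from twin
# correlations: h^2 ~= 2 * (r_MZ - r_DZ). The correlations below are
# made-up illustrative numbers, not the study's actual figures; real
# twin studies fit more sophisticated variance-components models.

def falconer_h2(r_mz, r_dz):
    """Heritability estimate from monozygotic/dizygotic twin correlations."""
    return 2 * (r_mz - r_dz)

# A strongly heritable trait: MZ twins much more alike than DZ twins.
print(round(falconer_h2(r_mz=0.80, r_dz=0.45), 2))  # 0.7

# The G-spot pattern: MZ twins no more concordant than DZ twins,
# implying essentially zero heritability.
print(falconer_h2(r_mz=0.40, r_dz=0.40))  # 0.0
```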

Well, this turned out not to be the case: Dizygotic twins were about as concordant in reporting a G-spot as monozygotic twins, and this is the real point of the paper. It’s not as spectacular as the mainstream news articles, but I’m surprised that they so failed to emphasise this in their own press release. Ah well, such is the hunt for fame, I suppose.

In the conclusion of the real, scientific paper, the authors are of course forced to admit that

A possible explanation for the lack of heritability may be that women differ in their ability to detect their own (true) G-spots.

They, of course, do not believe this to be the case. We may reasonably ask, why not? And how good is your evidence? My thoughts will be very tentative, because I’m not an expert in any related field; but we may at least reason about it.

First, I will note that the study’s exclusion criteria were, at times, a bit puzzling.

Women who reported that they were homo- or bisexual were excluded from the study because of the common use of digital stimulation among these women, which may bias the results.

I daresay it may bias them! For example, if the G-spot exists, it’s a specific anatomical location inside the vagina. Because it is postulated to be a very specific location, it may be difficult to stimulate with the penis, which is after all not prehensile and may not be angled so as to optimally stimulate a specific location. This postulated spot could perhaps be more easily located and stimulated with the fingers. Therefore, if it does exist, and if we are restricted to self-reporting as evidence, I would expect to find much stronger evidence for this in a population with common use of digital stimulation. The people I would ask first are the people whose answers they discarded. I would be very curious to see how their data are affected if they include this population. What was their rationale for the exclusion criterion? Was it determined beforehand, or after the data were in? Would it contradict their conclusion? What if this population were considered exclusively?

This looks like a very serious weakness to me, as the exclusion criterion seems to be specifically geared towards reaching a particular conclusion. (I can’t think of anything much more damning I could possibly say about a study.) It’s not the only thing that makes me raise an eyebrow, though (but it is the strongest).

Another thing is that, well, some traits just aren’t very heritable. (This is why we measure heritability; if there weren’t variation in how strongly phenotypic traits are associated with genes, there’d be no need.) I suppose the authors may reasonably expect their readership to be familiar with not just the concept of heritability (as I am), but also what kind of numbers we should expect (as I am not). Is a “close to 0” heritability common, or unusual, or rare, or impossible in variable phenotypic traits? Still, it is possible that heritability of the G-spot—not necessarily its existence, but perhaps its precise location and orientation, or its sensitivity—is relatively low. Is the study still powered to detect it? How does this render it more vulnerable to other confounders?

There are various criticisms leveraged against twin studies in general. Twin studies are potentially wonderful tools because monozygotic twins offer unique opportunities to investigate heritability. (Personally, I think the most interesting ones are of that rarity of rarities, pairs of monozygotic twins raised apart; the surprising similarities they show in a very wide range of behavioural traits are strong evidence of genetic conditioning.) But they are not perfect.

And finally, I make the observation that the institute—the Department of Twin Research & Genetic Epidemiology—maintains a database of twins (an awful lot of them: Some 11,000 people). This is great; it enables them to efficiently perform twin studies. However, studying the same sample over and over again is problematic. If you look at the same N people, examining them for different properties over and over again, you’re bound to find an apparent correlation eventually. Think about it: If you pick 100 names at random from a phone book, you’ll expect about half of them to be male, half female; and about 8–15 of them to be left-handed…but if you examined them for blood pressure, and dietary habits, and sexual preferences, and number of children, and so on for any number of questions, it would be bizarre if they were an average sample in every respect. This is a problem with data mining. Clearly, the department’s database is pretty large, but then they’ve already published over 400 research papers. At what number of papers should we statistically expect to find spurious correlations?
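This worry is just the multiple-comparisons problem in disguise, and the arithmetic is simple. A sketch, taking the 400 papers mentioned above at face value and assuming (purely for illustration) one independent hypothesis test per paper at a conventional significance threshold:

```python
# Multiple-comparisons sketch: if each study tests a true null hypothesis
# at significance level alpha, spurious "significant" findings accumulate.
# Assumes (for illustration only) one independent test per paper.
alpha = 0.05      # conventional significance threshold
n_studies = 400   # the paper count mentioned above

expected_false_positives = n_studies * alpha
p_at_least_one = 1 - (1 - alpha) ** n_studies

print(expected_false_positives)  # 20.0 spurious findings expected
print(round(p_at_least_one, 6))  # ~1.0: at least one is near-certain
```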

All in all, the study was a bit more sensible than mainstream media had me thinking at first, but as research papers go, I found it surprisingly unimpressive. In particular, the exclusion criterion that discarded answers from gay and bisexual women smells very fishy, and I wouldn’t be terribly surprised if it “biased” the results so far as to invalidate their conclusion.

In a general sense, I trust science—I trust the scientific method, and (to a lesser but considerable degree) I trust that scientific consensus will move toward the right answers: Science is often characterised as an asymptotic approach to the truth (we may never know it exactly, but we will get ever closer). However, when considering a single study, one should be cautious. Never trust what the mainstream press says about it at all, whether you like what it says or not—ordinary reporters lack scientific savvy, good science reporters are rare, and after the editors have their say, it’s often dubious whether the scientists behind a finding would agree with anything the press has to say about them except, perhaps, the scientists’ names.

And while the scientific method is excellent, and the scientific consensus is the best approach we have to knowledge, some studies just aren’t worth the paper (or web pages) they’re published on. If you want to adjust your opinions according to a single study, read it. Read it critically.

haggholm: (Default)

I’ve been meaning to make a brief post on this “Climategate” thing, but I’ve held off, both because of the annoying lack of creativity that went into coming up with that name, and because the whole thing seemed like a shining example of a storm in a teacup. In fact, I—

—Well, thankfully, I don’t need to explain what I think. Just when I started to feel neglectful for not having done so, I chanced upon this video which not only expresses what I thought, but also adds a bunch of facts I was not aware of.

The short-short version (and my comment when I first saw a snippet of one of the emails) is that the people who think it’s obvious fraud are obviously ignorant of scientific jargon.

haggholm: (Default)

…Despite the variation across time and space, it’s safe to say that most languages, probably all, have emotionally laden words that may not be used in polite conversation. Perhaps the most extreme example is Dyirbal, an Aboriginal language of Australia, in which every word is taboo when spoken in the presence of mothers-in-law and certain cousins. Speakers have to use an entirely different vocabulary (though the same grammar) when those relatives are around.

Steven Pinker, The Stuff of Thought

haggholm: (Default)

An amalgam of someone I knew, various remarks (held in various degrees of conviction), and many things I have read, holds a position on objective fact that I find peculiar, to say the least. A conversation might artificially run somewhat like this:

Petter: …As an example of a sex difference, men tend to be stronger than women.

Amalgam: I find that offensive. Many women are as strong as, or stronger than many men. Besides, sex isn’t binary, you’re over-simplifying.

Petter: Yes, that’s all true, but that doesn’t change the fact that on average, men are stronger than women. Look, I’m not arguing that men are better; I’m just observing that—

Amalgam: What’s a ‘man’ and a ‘woman’, anyway? Lots of people have ‘abnormal’ chromosomes, like XXY or XXYX, or have other intersex conditions.

Petter: So what? I’m not trying to say “A is a man, B is a woman, therefore A is stronger than B”. I’m just making an objective observation—that, as a matter of fact, without attaching any valuation to it, men (“XY individuals with motile sperm”) are on average physically stronger than women (“XX individuals with functional ovaries”). The existence of exceptions and of individuals who do not fit neatly into this scheme does nothing to contradict that generalisation.

Amalgam: But your binary division is artificial. Sex isn’t binary. Your distinction isn’t useful. What is it good for? So what if we “know” about this difference between “sexes”, or other differences—how does that make anything better? It just creates more grounds for people to discriminate on.

At this point, of course, I am invariably rather close to beating my head against a wall. I am a rationalist and naturalist, I am fond of objective truth, and I am rabidly obsessed with precision; but let’s try to calm down and look at this in a detached fashion.

It is of course true that lots of people don’t fit into any given definition of “male” and “female”. Your body may not match the typical representation of your chromosomal configuration, or you may not have a common chromosomal configuration at all, or you may even be a chimera, with different chromosomes in different tissues. But so what? I’ve never come across a sane argument that my casually proposed definition does not match a great majority of people, and if it matches a great majority of people, then it suffices to make generalisations.

It’s also true—painfully, obviously true—that the generalisation does not apply in every case; but then, that’s inherent even in the definition of a generalisation. Physical strength (like most natural attributes) is distributed in a continuous distribution, probably something like a bell curve; the “men” curve will have a higher mean than the “women” curve (that’s exactly what my claim says), but there’s lots of overlap. Certainly there are many women stronger than the average man, and vastly more women stronger than weaker-than-average men. But, again, that in no way contradicts the claim that the mean strength of men is greater than the mean strength of women.
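The overlapping-curves picture can be made concrete with two normal distributions. All the numbers here are invented purely for illustration, not real strength data:

```python
# Two overlapping bell curves: the probability that a randomly chosen
# member of the lower-mean group outscores a randomly chosen member of
# the higher-mean group. All parameters are invented for illustration.
from math import erf, sqrt

def p_a_exceeds_b(mean_a, sd_a, mean_b, sd_b):
    """P(A > B) for independent normals: the difference A - B is itself
    normal with mean (mean_a - mean_b) and variance (sd_a^2 + sd_b^2)."""
    diff_mean = mean_a - mean_b
    diff_sd = sqrt(sd_a**2 + sd_b**2)
    # Standard normal CDF evaluated at diff_mean / diff_sd
    return 0.5 * (1 + erf(diff_mean / (diff_sd * sqrt(2))))

# Group means one standard deviation apart, equal spread:
# even then, roughly a quarter of random pairings go the "wrong" way.
print(round(p_a_exceeds_b(100, 15, 115, 15), 2))  # 0.24
```

So a mean difference and substantial overlap are entirely compatible, which is exactly the point of the generalisation.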

Objectively, then, it seems pretty obvious that I was factually correct (quibbling about what’s ‘normal’ aside). The final objection, however, was that my distinctions (and conclusions) aren’t useful. In fact, one particular person has gone further and indicated, in so many words, that it is therefore better not to hold a belief, even if it happens to be factually correct, because (e.g.) it may affect undesirable social trends, such as (in this case) perhaps a condescending attitude of men toward women in contexts of tasks that require physical strength.

I find this objection ludicrous on several grounds. First, I don’t necessarily care about the social impact of knowledge in the abstract: That is, I am interested in things without any regard to their social impact. I think it’s tremendously interesting that the reason why the sky is blue is Rayleigh scattering, but as far as I know this has no impact on any social phenomenon at all—the sky, after all, is blue regardless of whether we know why or not; it remains blue regardless even of whether we know it or not. I think that knowledge for the sake of knowledge is a very noble pursuit, and that if having knowledge leads people to make bad moral choices, then we need to focus not on limiting knowledge, but on improving moral education—or perhaps rather on combatting errors in thinking such as the naturalistic fallacy, the inveterate tendency to conflate “is” with “ought”.

Second, the latter part of that objection applies to pretty much anything. Even if we decide not to investigate sex differences, and don’t teach anyone that men are (on average) stronger than women, the facts remain facts. Reality is objective, and whether we believe it or not has no impact thereupon. Thus, even if we try our best to ignore the facts, it still remains true that when push comes to shove, men are better at it (pushing and shoving, that is). Biology doesn’t give a damn about political correctness. The facts, therefore, are there to be rediscovered. I fail to see how hushing them up does anyone much good—only ‘well-intentioned’ people will consent to it, and malicious people who would base their malice on unfortunate facts are always able to do the research for themselves, with the considerable advantage that they can objectively demonstrate that they are right. By avoiding the knowing of unfortunate facts, we have silenced the discussion that might have taught us to properly deal with them.

Third, denying that differences exist seems to me counterproductive even in social contexts. Men are physically stronger than women. There are many other differences. So what? That doesn’t make men morally superior to women, any more than the differences make women morally more valuable than men. Not only should we be smart enough to treat people with dignity regardless of such biological differences, but the refusal to do so seems to me to cast aspersions on other interactions. If we pretend, in the interest of “equality”, that men were no stronger than women, for instance, then we are implicitly saying that equality of strength is socially and politically important—which makes me wonder what we’re supposed to think of people with physical conditions that cause their strength to atrophy. At this point, I may have to pull out something more controversial and brain-related, and point out that men consistently score higher than women on 3-dimensional visualisation tasks: If we deny this, are we not implicitly saying that individuals with inferior skill in this narrow domain are somehow more generally inferior? This, it seems to me, follows logically; therefore I find the premise reprehensible.

Fourth, given that (as per #2) there do exist real differences, having a statistical notion of what they are is the only way we can intelligently formulate policies to deal with them. It’s very easy to say that we shouldn’t, that we should treat everyone as an individual and not “assume” that we know things about them based on narrow and superficial criteria like sex or race—but while that’s certainly true in personal interaction, sometimes we do have to deal with masses of people. If I were to allocate a health budget, for instance, then knowing that black people are more prone to sickle-cell anemia than white people, I would allocate more money to deal with this particular problem in areas with a greater proportion of black people in the population. If we wish to raise awareness of conditions that affect predominantly men, or women, or black people, or white people, or fat people, or skinny people…then simply knowing about the general correlation may help us focus our concern on the more vulnerable segment of the population. Certainly, this does sadly leave highly exceptional people in the less-targeted segments, but since no health budget exists that can deal with every problem, isn’t it our moral duty to address the problem as well and as specifically as we can? (It also happens that the needless testing we’d have to perform on the ‘normal’ majority of people in order to find the very few with undetected ‘abnormalities’ would probably create more problems than it solves, even apart from budgetary exhaustion—“more testing” does not equal “better health”, and may often lead to reduced quality of life. Sad, but true.)

I am not going to tie this together into a conclusion—I think that there are several, and they are fairly neatly listed in point form. And, to be perfectly honest, my strongest visceral reaction to conversations of this kind is “That’s just preposterous!”, because I cannot wrap my brain around the idea that knowing something is a bad thing. If your ethics object to reality, then adjust your ethics, or change reality to match them—but actually change things; don’t just try to change perception. If (and this is hypothetical and quite possibly counterfactual) we observe that women are less educated than men, then the correct response is not to smile and assure the girls that they are just as good as the boys—no, the answer is to find out why and to do something about it!

As for those things which biology forces upon us, whether we like it or not—such as my running example of physical strength—well, so what? We can choose to deal with it in various ways. In an awful lot of contexts, it only makes sense to evaluate people on individual merit, anyway. (We might regard the cost of evaluating people of the generally-less-suitable segment of the population as an acceptable cost for fairness.) In other situations, it actually seems to make sense to freely accept that our conditions differ: So, for instance, very few people object to sex divisions in sports—and as a fan of combat sports, I am glad: The muscular differences between men and women would make it a rather brutal affair, and watching a man fight a woman would just be unpleasant, even apart from cultural baggage. (In such cases, though, it does become difficult to know what to do about people who constitute exceptions—I don’t know whether the woman in the story has any chromosomal abnormality, but the fact that it is an issue is enough to note that it’s a troubling area.)

Finally, I want to make it very clear that I am perfectly well aware that many of the differences between the sexes are not “biologically determined” even probabilistically, but are strictly the products of our cultures and social conditioning. There are many worthy causes there, and I won’t say much about them because I’m not really qualified to speak intelligently on the matter. However, anyone who seeks to gain me or people like me as allies there (and it shouldn’t be too difficult) had best be careful about steering away from the post-modern muddle-headedness that would alter reality just by changing our perceptions, because the more you refuse to acknowledge the reality of those differences which objectively do exist, the less I am likely to take you seriously when you speak about those imposed on us by a patriarchal cultural heritage¹.

¹ This very explicitly does not apply to certain regular readers…

