haggholm: (Default)
Thus spake WLC.

“Fine-tuning ergo God” is like saying “The odds of drawing just these ten cards are so small, it must be rigged!” when we don’t know the composition of the deck: A glib generalisation, as though every drawing of cards corresponded to a deck of Bicycle playing cards and a probability distribution we are, supposedly, intuitively familiar with. There’s a bait-and-switch here, since for all we know the deck we’re really concerned with is (meta?)physically constrained to nothing but straight flushes. For all we know, the deck might have only ten cards to begin with or the cards might be stuck together with string and Scotch tape.

Of course it’s entirely legitimate to wonder why the parameters of physics are just what they are (and on some level there is presumably a reason), but I find it highly suspect when someone asserts that they were a priori improbable—how exactly do you determine the probability? Can you demonstrate, from first principles of making universes, that there’s a wide range of possible parameters whereof the chemically productive parameters form a small proportion? I’m sure the cosmological community would be fascinated to learn the principles.

Point one—I must call them points, for they aren’t really reasons—point one is juvenile, point three is perversely ironic in the light of two millennia of unresolved theodicy (don’t you think the Cathars had a better idea, until the Catholics murdered them all?), point four is presumably included for the sake of hilarity alone (surely no one is expected to take it seriously?), and point five must have been added after Dr. Craig got drunk and forgot to activate Gmail’s Mail Goggles, but point two is offensive in its duplicity.

Oh well. For this particular atheist, Christmas—well, I think of it more as “juletid” in Swedish, precisely cognate with Yuletide, a pagan term that merged with Christmas when Jesus’s birthday was moved to mid-winter to co-opt older religious celebrations like Saturnalia and, elsewhere, Yule—was never much about religion but rather family, presents, a tree (of likely pagan origin), and good food (much of it based on pork and so presumably frowned upon by Jews like the Nazarene). Or at least, it was not about religion when I grew up. Now there’s always a heavy dose of news articles, editorials, and opinion pieces by Christians who hysterically complain that their holiday is under attack (because they’re not allowed a monopoly), that Jesus and Santa Claus are white, so there!, or (à la Craig) assert that people like me are echoing slogans rather than thinking. I don’t go pissing in his crèche, but ye gods! (both Jesus and the older myths he was based on), this editorialising gets on my nerves.

http://www.foxnews.com/opinion/2013/12/13/christmas-gift-for-atheists-five-reasons-why-god-exists/
haggholm: (Default)

Jaimie recently got me reading a couple of feminist books: First, an anthology called Bitchfest (mostly good, with only one or two contributions I found truly wanting, although a lot of it failed to resonate with me even though it was intelligent and well-written, because it deals chiefly with a pop culture that I do my best to avoid). Second, and currently, Naomi Wolf’s The Beauty Myth, which I gather is something of a classic. I’ve talked about feminism and body image issues with Jaimie a fair bit, and I’ve lightly skimmed some blogs here and there. And in my usual nitpicking way, I have come up with a nit.

Now, obviously I don’t argue with feminism in the conventional sense of—well, let me define it as “a movement advocating gender equality, focusing in particular on promoting the position of women up to equality” and hope I won’t meet with much argument. There are some things branded as feminism with which I take issue—difference feminism, for instance, or a lot of the tripe that seems to get passed around in some Women’s Studies programs. Don’t even get me started on “male and female ways of knowing”; I’ll just say that I agree with Richard Dawkins who ended a description of how women supposedly use intuition and feelings instead of reason and logic: …and other insults to women. It’s been pointed out that this fluff is basically Victorian, only now picked up by women who call themselves feminists…

But that’s all by the way. Basic feminism—equality is good, and it’s more specific than “egalitarianism” because it focuses on women as the currently more disadvantaged gender—is obviously a good idea. Calling out beauty myths, impossible standards, and harmful propaganda is a very worthwhile enterprise, and while we should remember that it isn’t completely one-sided and guys like me worry about our lovehandles and so on, I think it’s pretty clear that there’s more of that crap aimed at women, so focusing primarily on that seems entirely fair.

That said, there’s some standard feminist rhetoric that bugs me, even if it may be that it’s not much more than the phrasing. However, as regular readers of my blog may have had occasion to notice, phrasing is kind of a big deal to me; and as it’s my damned blog I will rant about it. (I won’t pontificate. I don’t pretend to be august or infallible, I don’t cover up pædophiles, and I have better taste in hats.)

Let me reiterate that. I’m talking about phrasing here, and do not therefore mean to dismiss the arguments that are so phrased. I am talking about consequences of implication, so read carefully before you assume that I am accusing people of making accusations; more likely I am saying that they are making unfortunate and probably unintended implications. Please keep that in mind, or this blog post will make me look far worse than it already does…

Back on track, then: A lot of feminist writings come across as antagonistic. At first blush this actually sounds reasonable, as there is certainly plenty to be angry about; and let it not be said that it is never appropriate. However, there’s a difference between anger and antagonism: Antagonism exists between antagonists. In my personal view, the enemies of feminism aren’t usually persons. Instead, they’re less tangible things: Groups of people in the aggregate, perhaps; social and cultural trends and traditions; mores and taboos. There’s a glass ceiling, but there isn’t a glass ceiling repair man. But whoever encounters the polemic is a person. Society doesn’t read blog posts; readers do. And when something is phrased in an antagonistic way, it’s awfully tempting to leap to the conclusion that it’s aimed at an antagonist (or, even if you consciously tell yourself that it isn’t, read it through that emotional filter).

I could talk about “privilege”, but the thing I want to rant about today is the common way of phrasing things as though they were intentional. “This is done to keep women down”, “the patriarchy responded to women by attempting to—”, and so on. True, intent is not fucking magic, but it still matters: Morally, and tactically, and, well, factually. Who is this Patriarchy that does things to keep women down (an explicit statement of intention)? Who are its members? Why, given that I am a man, was I not invited? Well, of course it’s not actually some shadowy cabal of old men writing the Protocols of the Elders of Chauvinism, but rather a set of attitudes, largely invisible even to most of its perpetrators—but then why personalise it as though it were a bunch of people? (Naomi Wolf uses a bunch of such language, although I am not accusing her of insanity: She explicitly says that this is not a conspiracy theory; it doesn’t need to be. But the language is still there. And I personally think that sort of language affects the debate.)

Perhaps it serves as a psychological crutch. Maybe it is easier to imagine that there are villains out there, figures of malice whom one can dream of fighting and defeating, rather than the less satisfying, more difficult, and infinitely more nebulous web of thoughts, opinions, and behaviours that truly are at fault. But I’m not usually a big fan of psychological crutches, and the fact that it’s incorrect seems like reason enough to dismiss it.

Well, often incorrect. There’s no denying that there are awful people in the world, after all. Let’s make a hierarchy of people, ranging from the vilest to the best:

  1. People who oppress women out of malice or delusions of superiority. Christian fundies, honour-killing Islamists, MRAs, and moustache-twirling villains who tie women to railroad tracks.

    I’ve no sympathy for these people. Crucify them at will. It’s not like consciousness-raising or education will help: This category of people know damn well the harm they cause and do it anyway. Attack them as ideological enemies, because they are.

  2. People who oppress women knowingly but incidentally—for greed or profiteering, presumably, such as perhaps marketing execs who consciously design or push ideals, procedures, and so on that serve to exacerbate eating disorders, body dysmorphia, and poor self-esteem. I think this is a slight degree less vile than those who simply want to keep women down, but only a slight one.

    I’ve no sympathy for these people either.

  3. People who oppress women incidentally who should know, but use rationalisation and cognitive dissonance to avoid knowing. And let’s face it, rationalisation and cognitive dissonance are things that humans excel at. Probably most of the marketers involved in designing the most horrible and damaging campaigns ever made never really thought they were doing harm. These ideals are already out there, they might think; it’s unfortunate but it’s not my fault and what does it matter if another drop is added to the bucket? Even a surgeon pioneering some godawful new procedure like removing pinky toes to fit into smaller shoes or rib removal for a slimmer waist (which may be apocryphal, but I did say “pioneering”) will surely find a way to rationalise it: The unrealistic ideals are already out there and women are already feeling bad for failing to live up to them. I can’t change that, but I can at least provide a service that allows them to meet them when nature won’t.

    A person like this does cause harm, but I’m not sure whether the same rhetoric is ideal. Accusing them of causing malicious harm, it seems to me, could make them re-examine what they are doing and come to their senses, but could also make them dig in their heels, protest defensively, and entrench their position. Speaking for myself, few things put me on the defensive more quickly than an unjust accusation, and if you want to make someone aware of the harm they are causing, surely putting them on the defensive—mustering all their powers of rationalisation—is not what you want to do.

    Still it’s hard to defend someone who sacrifices women’s toes on the altar of beauty and stiletto heels, and I won’t waste energy in the attempt; they aren’t worth it.

  4. Average schlubs like yours truly who cause harm incidentally, and not due to cognitive dissonance but sheer ignorance. I try my best, and I even read the occasional article by bloggers like Amanda Marcotte and Jen McCreight, and every post by Greta Christina…but I’m sure that I err in ways that I won’t enumerate because I can’t; I don’t see them. Isn’t that the whole point?

    Far be it from me to suggest that anyone cater to me; no one should have to bend over backwards to court me as an ally for basic social justice. However, avoiding antagonistic language isn’t bending over backwards; it’s just standing up straight. I think it’s unfortunate and alienating, and more than a little hurtful, when I feel addressed like an enemy, even incidentally and accidentally. The same argument as above applies, but more strongly. Dismiss me because of “male privilege” or as part of “the patriarchy” and I will feel like I’m being treated like an enemy, someone you want to drive away from your camp. I’m trying my best, however short I may come up.

  5. Activists who actively combat stereotypes.

    You go, guys and girls; I wish I had your energy.

  6. People completely free of bias.

    These are necessarily either fictional, comatose, or dead.

I think that the first two categories are pretty small, the first especially; at least in my part of the world. I think very few people really know and acknowledge the harm they do. I think that even the most egregious things that happen in the beauty industry, say, are more boiling frog-like: A cultural backdrop exists favouring, say, thinner women, and people build on that, each model or each advertiser wanting to go a little thinner, a little more “beautiful”, until before people realise what’s going on, we end up with terrible, harmful ideals and a cycle of emaciation that no one has the power to stop (without going out of business, succumbing to a more ruthless adversary). No villains, just people acting in honest ignorance, perhaps well-intentioned.

I do not think that beauty companies are really out to make women put themselves down. I think that they are out to make shitloads of money, which is precisely the goal a capitalistic system rewards them for seeking. I think that the more ruthlessly they do so, all else being equal, the more likely they are to succeed in the marketplace; and the fittest companies, best able to convince women that their products are needed, will survive. But I don’t think this is done out of misogyny. And I’m sure they’d sell just as hard to men if only the cultural backdrop we live in made it possible. It doesn’t; they can’t; so they don’t. But I don’t entirely buy some of the “commoditisation” arguments because that phrases it in language of intention, which I find misleading.

I don’t even think that big male power structures like corporate boards or governments are trying to keep women out. I’m not denying that they have these effects, but I do deny that they are trying. Why on Earth would they? It’s trivially true that a male-only government benefits men (presumably half of all qualified people are female, therefore excluding women gives cushy jobs to men occupying their seats), but I fail to see how this serves the interests of anyone in that government, or on that corporate board. They’re in it for themselves, after all, not for the sake of Mankind over Womankind. What has an old politician to lose from allowing women to make it into political office? By the time they can rise to the top levels and compete for his job he’ll be long dead and gone anyway, so as far as that’s concerned they’re irrelevant. I do not deny that there are surely misogynists in governments and corporate boards just as there are elsewhere, but I fail to see evidence or rationale for any widespread motivation that merits claims such as, to closely paraphrase something I read the other day (Bitchfest?), that the American government has a vested interest in suppressing women. Does it? How? How can a body of hundreds of people (or thousands, depending on how you define “government”) with competing political ideologies, some acting selfishly and all having to cater to public approval (slightly more than half of which is female), all share a vested interest in that?

Does this mean that the consequences are any less harmful? Of course not. Does it mean that people should get a free pass on their wrongs? Don’t be absurd.

But I think it is harmfully misleading, because it paints the wrong picture. If there are villains, it’s simple: Fight the villains. If there are people in power who struggle to suppress or oppress women, take them out of power. But if that’s not the case—if it’s no one’s intentions, if it’s instead the outcome of an impersonal system—then it’s a different problem and needs a different solution, education and consciousness-raising rather than taking out the villains.

Obviously feminists have done and still do a fantastic job of consciousness-raising and education that I could not hope to match in a thousand years even if more than ten people read my blog; I just find this particular detail grating.

I also think it’s unfortunate because it’s alienating. If the patriarchy is spoken of in terms of an oppressor (true, in an unconscious sort of way, but phrased as though intentional), and if I am accused of being part of the patriarchy (which I suppose is regrettably true, in that I presumably carry various thought and behaviour patterns with me that make me part of that general cultural problem), then—boom!—the language has become “you’re an oppressor; you are our enemy”. Reading up on feminism has helped me understand this language and realise that this is not what it means, but giving me the initial impression that I am the enemy you want to destroy is kind of a barrier to wanting to read up on it! I might contrast this to pieces by Greta Christina, which manage to describe the same issues without pulling punches but never using this kind of language. First I was told that I was part of the patriarchy that oppresses women; only much later did I find Greta Christina’s blog and read her explanation of patriarchy as a set of embedded attitudes that affect both men and women, and is appropriately labelled “pat-” only because women are the ones who get the rawest deal.

And this matters a lot to me because it makes me much more willing to re-examine myself. Telling me that I’m part of the evil patriarchy makes me initially think that you’re insane because I am a member of no such shadowy cabal; and secondly to think that you’re wrong because I know I’m well-intentioned and in no way interested in oppressing women. Telling me that there exists a set of pervasive attitudes that I express unknowingly makes me feel like I’m being invaded by some cultural, memetic disease, and like I ought to learn more about this disease and how to cure myself of it. If I accidentally step on your toe and you tell me, I will apologise profusely, but if you do so in language that to the uninitiated sounds like an accusation of assault, I will be self-righteously offended (especially if it’s not clear that I stepped on your toe to begin with). I’m not asking for brownie points when I make the slightest efforts; I just hope for a tonally neutral notification that I did something wrong.


This veered into being more about me towards the end, which was not my original intention but was perhaps inevitable, and as I said right at the very top, this is about my perception of rhetoric, phrasing, and implications; and thus necessarily becomes subjective. If I thought that feminist writers regularly wrote about actual purported Cabals Of All The Men producing grand designs to oppress women this would be quite a different post, after all!

Finally, before I trepidatiously post this and go into a cringe anticipatory of the feedback, I want to respond to one hypothetical retort: Maybe the language of intent just makes better rhetoric. Maybe, in spite of its potential to alienate as it has on occasion alienated me, and in spite of its fundamental inaccuracy, it’s just so valuable in its strong emotional charge, and so galvanising in its implicit image of an evil that is ultimately fightable, that it’s worth it.

To this I can say only that I’ll never condone untruths even if they are rhetorically convenient.


Of course, this whole thing makes me wonder whether I have a point at all (good or bad), or whether I am just uncommonly sensitive to certain nuances of phrasing that others wouldn’t even notice, even while I am tone deaf in other, remarkable ways.

haggholm: (Default)

The other day somebody used the term Uncle Tom to describe—well, as the dictionary defines it:

noun Disparaging and Offensive.

a black man considered by other blacks to be subservient to or to curry favor with whites.

This set me to thinking, because while the term was here used in a context where the dictionary definition works perfectly well and I had no argument with the argument, as it were, the existence of the term itself bothers me. (Have you noticed a tendency for my posts tagged “rants” to be associated with terminology? Caveat lector.)

I haven’t read Uncle Tom’s Cabin in many, many years; as I read a Swedish translation when I was but a child, it may have been abridged for all I can remember. But my vague recollection was of a book that told a frank and rather brutal story of slavery, with the titular character a tragic one. Maybe he was and maybe he was not a “subservient” one, but why has the title or eponymous character become so closely associated with slavery in a negative (i.e. pro-slavery, pro-subservience, evil) sense?

I spent some time reading about the book and find my opinions upheld by history. When Harriet Beecher Stowe wrote it, she did so explicitly to expose the evils of slavery: I feel now that the time is come when even a woman or a child who can speak a word for freedom and humanity is bound to speak… I hope every woman who can write will not be silent. It was a tremendous success, galvanised abolitionists, and gained an apocryphal rumour of being a trigger in the very Civil War (apocryphal, I reiterate, but to gain that rumour in the first place seems significant). It spawned great outrage, to the tune of not just letters of protest but 20–30 “anti-Uncle Tom” books written in the South to defend their detestable institution of slavery from Stowe’s assault. (And worse protests; she received a slave’s cut-off ear in the mail.)

Surely, then, this should properly be regarded as an intellectual salvo of considerable weight fired against the institution of slavery. Surely (even without the tangential argument that the book injected some feminist viewpoints) it must be regarded as progress! It’s true that some of the negative press associated with the title was earned—though neither by book nor author, as various stage productions based on the book were made with typically faint fidelity, ranging from more or less sympathetic to downright pro-slavery, and all staged with white actors in blackface. But it seems sad to allow this to take away from the impact of a great work. I’m not saying that Uncle Tom should be used as a term of praise or anything, but to use it as a pejorative seems like heaping undeserved insult on a step—one of many, one that had yet to reach the goal—but a good step in the civil rights movement.


Obviously I don’t subscribe to wholesale relativism wherein the racist stereotypes that are encoded in the work are somehow okay just because racism was the norm back then. Feel perfectly free to criticise them where you find them in the novel and expect no argument from me. Nonetheless I feel that looking at things in a historical context is necessary to gain perspective. Is it perfect? No, nothing is, and doubtless there are things in it which by today’s standards are appalling. But it was not written when today’s standards were around; it was written when there were actually slaves in the United States, and at the time it was progressive. I think that the measure of the author is that she rose above the rhetoric of her time. Who can do more? Probably we all do things unthinking that our descendants two hundred years from now will consider barbaric. Hopefully we do our best, question a little more than we were raised to, make a little progress.

The same kind of criticism is often raised against Charles Darwin, for example—he was a racist, the trope goes, referring to primitive people as savages and sounding very much Victorian and colonial. Well, yes, he was a Victorian and England was pretty damned colonial. But does he deserve being singled out for criticism? Well, he had a black man for a friend on his travels; wrote in notes of harsh condemnation of the institution of slavery; wrote of a black man who gave him lessons in taxidermy and whom he referred to as very pleasant and intelligent… It also seems from his writings that he attributes most or all of the difference between civilised and savage peoples to upbringing and education. Was he, nonetheless, racist in his attitudes toward savages? I would personally say yes, but far less racist than the norm; by Victorian standards he shines as a great progressive who should be lauded for shaking off the shackles of his surrounding culture and taking great strides toward a better worldview.


It seems that a great deal of discourse is willing either to accept something as a great work to be lauded in every detail, or to dismiss it as a flawed work of questionable morals; or (if you’re a hardcore relativist) to acknowledge great-but-questionable works as great because context excuses everything.

Speaking of Darwin, there seems to be this curious parallel when discussing science rather than ethics, where popular culture has some strange obsession with “Darwin being wrong”. Well, so what? He wrote in the 19th century. He was wrong about an awful lot of things because science has progressed a great deal since then; but that does not detract from his accomplishments. Newton saw further because, he said, he stood on the shoulders of giants; modern biologists see further than Darwin, but his are the giant shoulders they stand upon. He should be praised for the progress he made, not criticised for the progress he didn’t make—it was still net progress, after all!

I’m sure there are loads of people who discuss things with more nuance, but I don’t seem to come across them, which alone I find troubling. Surely there is a very large place for works that are both; people and writings that we can praise as being wonderful and progressive even while we acknowledge that some of their teachings are dated.


This is tangential and I hesitate to add it because it seems like a cheap shot and largely irrelevant to the post, but lest I be thought a hypocrite: When I criticise biblical morality, as I sometimes do, by pointing out that the biblical Jesus failed to oppose slavery, I do not think that he deserves the excuse of historical context. Why not? Simply this: Because the myth has it that he was perfect. Real people like Harriet Beecher Stowe and Charles Darwin should be credited when they rise above their beginnings and make steps in the right direction. Supposedly-perfect people, when they fail to live up to perfection, are doing evil.

haggholm: (Default)

June 10, I wrote on Facebook:

There’s nothing quite like a major sporting event to dial my everyday alienation and misanthropy up to a seething hatred for humanity.

As I witness the howling, drunken masses lurching down the streets, banging on the (large, expensive) windows of streetside businesses, I imagine I know what the Romans envisioned when they said “Hannibal ad portas”; and wish I didn’t have to admit to membership of the same species.

I also had another exchange running something like this:

Person A: Y'all may hate it, but strangers don't usually look so happy to see and physically engage with you.

Me: When “physical engagement” comes in the form of a stranger rushing at me, screaming and waving their hands about, I don’t get the vibe of happiness—I feel physically threatened. I can’t read people like that; I have no understanding of their thought processes (if any), and I’ve had people jeer at my obvious lack of enthusiasm before: I can only wonder how long before some asshole is drunk and irate enough to take a swing at me for it.

Person B: You have felt threatened by people wanting to high-five you?

Me: Maybe? The problem is that it’s a matter of perception. To you they are people wanting to high-five me. To me they are drunken idiots with all the predictability of dingos; with their concern for the safety of others demonstrated by window-banging (and yesterday also my kicked-over bike), and testified to by the sirens of police cars, fire trucks, and ambulances; and with their friendly inclusiveness to strangers shown by angry jeers at the slightest hesitation to exhibit a similar hysteria. But mostly it’s the unpredictability: You view them as “people wanting to high-five you”; I view them as people caught in the grips of some mass hysteria that I cannot begin to comprehend.

I’m not saying that my perception (except for the asides) is objectively true, but that’s not the point; I’m not trying to argue that I’m right and you’re wrong. I’m just saying that if you read my comment and thought that my thought process was “Ah, that guy wants to high-five me—shit, what do I do?”, you misunderstand the issue.

Today, I suppose, my concerns are vindicated, though I don’t feel particularly smug about it: Next time, my reply would simply be June 2011.

Yesterday, a team of people hired from various places around the world to represent Vancouver¹ in a pointless sport² lost a match in said pointless sport to a team hired from various places around the world (including Vancouver, I gather) to represent another city. In response to this, a bunch of contemptible rabble among the 100,000 people gathered downtown to watch the event engaged in various criminal acts, ranging from violence (there were stabbings, head traumas, many fights, and one fan of the opposing team somehow fell off a bridge) to widespread property destruction and looting. How bad was it? I’m not going to embed any picture or video because it’s too depressing. If you wish, you can browse YouTube videos or Google Images. It’s awful.

Today that area of downtown is dominated by the glimmer of shattered glass on the pavement, by boarded-up voids where >$1000 plate glass windows previously covered storefronts—and on a more heartening note, by crowds of gloved volunteers carrying brooms or garbage bags.

Last night my primary response was a sort of general, impotent anger at the whole thing, and (once I knew people were safely out of downtown) a fear that something important would get damaged: There were fires just across from the central branch of Vancouver Public Library, housing about 1.5 million books³; and of course there’s Academie Duello (where, too, I keep my rather expensive practice sword). Today, though, after seeing the aftermath, that anger is mostly replaced by sadness. Don’t get me wrong: I’d like to see the core looters lined up and shot in the face with rubber bullets and/or tear gas grenades, and I’m petty and vindictive enough to find the image of a rioter hit in the groin with a flashbang grenade hilarious. But mostly I’m sad that something like this happened, or could happen.

I also find it sort of frightening. True, out of the 100,000 or so hockey fans who thronged downtown yesterday, probably no more than a few dozen, or at most a few score, were there to cause trouble—but when trouble began, the crowds became mobs, as I gather crowds do. The looting was not committed by only a few people, nor were all the people who were beaten assaulted by people who came there explicitly to cause trouble. A tiny minority came with malicious intent; a much larger minority was caught up in mob mentality; enormous numbers of people were not directly involved but stood around, taking pictures, and incidentally getting in the way of the police (the Vancouver Police Department has only something over a thousand officers total); it was so bad that the police denied the fire department entry to some areas because they felt it was so unsafe that it was better to let some fires just burn.

And people wonder why I feel ill at ease and unsafe when hockey fans run howling at me? True, the proportion that go out and riot is tiny; the proportion who will take a woman’s refusal to high-five as a reason to slap her ass is tiny⁴; the proportion of people who will even yell Fuck you! in my face for having the temerity to read a book in public while waiting at a train station⁵ is still quite small—but when some hyped-up asshole runs at me, it’s not immediately obvious to me that they are one of the happy, harmless ones. I’m uncomfortable around large groups of people and don’t know how to interact even with placid ones; I’m unable to read large ones; I don’t want to be in the same city when I have any reason to think it may turn violent—and around hockey events, there’s always reason to think it may turn violent.

Someone mentioned the notion of Vancouver having its franchise revoked, being banned from the NHL. I hope that happens, and not merely out of spite and personal dislike of hockey (see ²), but also because if the city cannot run this sort of thing safely and without numerous injuries and massive property damage—in spite of what I gather was a rather heroic effort by the overwhelmed police force—then it should not run it at all.


¹ Cf. Mitchell & Webb.

² I personally dislike hockey and fail to see how anyone can possibly find it entertaining, but I concede that it’s really no more and no less pointless than many of the things I enjoy for no sensible reason beyond simple enjoyment: watching sketch comedy, listening to music, and so forth. The point is not that people are stupid for liking hockey but that it’s a profoundly stupid thing to get so worked up about.

³ How many books are a human life worth, and vice versa? If they’re rioters or looters, I’d rate them about even with Harlequin romances.

⁴ Not hypothetical.

⁵ Not hypothetical, but only unpleasant, rather mildly compared to the above. The book, incidentally, was The Eighth Day of Creation.

haggholm: (Default)

I do not feel that “the Conservative Party of Canada” is a very apt name anymore. Since they insist that the very government of Canada be referred to as “the Harper Government”¹, surely the party is very definitively “the Harper Party”. (Unofficially, of course, it’s GOPNorth.)

In the wake of the disastrous election that perversely gave this Harper Party a majority of seats in Parliament, and so a majority government (in spite of ‘only’ 40% of the popular vote), I feel a need to reiterate just why I feel that the Harper Party is such a terrible choice. It’s certainly true that I do not lean Conservative: I feel that gay people are human beings and should be treated as such, I think that even poor people deserve health care, I think that women should be allowed to decide what goes on with their own bodies, &c., all of which are positions that Conservatives traditionally oppose. But all this is incidental: Even if I agreed with their policies, I would still oppose them.

The reason why the Harper Party is so abominable, you see, is not because of its position on any issues at all, but because it spits on democracy generally. Consider:

  1. Harper faced a vote of confidence that he knew he would lose, which would have spelled an end to his government and forced a re-election that he might lose. So he had the Governor General (representative of the Crown) prorogue Parliament, i.e. temporarily suspend it. That is—as I understand it—when the Harper Party was faced with a majority of the democratically elected representatives of the people of Canada wanting to vote them out of office…then they temporarily suspended voting.
  2. Harper, in fact, then did this again.
  3. The Harper Party did away with the long-form national census used to collect detailed demographic data on the people of Canada. This means that they ensured that the data necessary to formulate policies that benefit people optimally is unavailable. This has no conceivable benefit except perhaps plausible deniability in the event that they treat people poorly and need to claim that they had no idea.
  4. The Harper Party, qua government, was the first government in the history of Canada ever to be found in contempt of Parliament. (They withheld information. This was what forced the recent election. And after this, people still voted for them.)

This is not the full list of evils committed by the Harper Government. Some are arguably worse.² But the above are what incense me as attacks on the Canadian institution of democracy: Silencing Parliament, the representatives of the people, when they threatened Harper’s power; treating them generally as an obstacle to ruling the country rather than a vital part of the system. This should be troubling to anyone: Even if you (unaccountably) agree with the Harper Party’s policies, you should still recognise that a threat to the democratic institution is a threat to your country; that any legal precedent that allows this government to steamroll over Parliament will similarly allow future governments to do the same—governments you may not like.

I find it difficult to express how offensive this is to me. True, comparing this to fascist regimes like the Nazis or the actual Italian Fascists is hyperbole—Harper’s steps in that direction are very tiny indeed. Nonetheless the direction of those steps is against democracy, and no one should support that, no matter their policies. It is for exactly the same reason that we should not (on the subject of hyperbole!) advocate the assassination of Harper: The country would be better off without him, but it would set a precedent whose cost would far outweigh the good.


I don’t even know if there’s anything to hope for over the next four years. With a majority government, Harper and the Harper Party are finally free to start wrecking Canada properly, in ways that a minority position has previously prevented. Maybe if they proceed so egregiously awfully that they’re found in contempt of Parliament again, once or twice, people will finally get sick of them and vote against them—but my limited knowledge of Canadian politics suggests that the best we can hope for is that the Harper Party only ruins Canada moderately.

What do you reckon we’ll see first? Attacks on gay rights, on women’s rights, on public health care?


I should add that my knowledge of Canadian politics is very limited. I didn’t bother educating myself very closely for the simple reason that there’s nothing I can do about it: I’m not a citizen yet so I can’t vote, and I’m not personable so I can’t change anyone’s mind. Nonetheless I do care, as the preceding angry screed testifies. If any of the above is based on misconception, feel abundantly free to correct me.


¹ On some level, it may be comforting for future governments composed of saner men and women to know that they will not be blamed: It wasn’t the Canadian government’s fault, it was the Harper government’s.

² Strong candidates: Illegal campaign funding, allowing a Canadian citizen to be tortured at Guantanamo Bay, lying, lying again, lying some more, killing science and climate protection, taking money from women and the poor in order to give to the rich, and so on. See also here and here.

haggholm: (Default)

A while back, I wrote a post about 3D movies and what I dislike about them. Note that I focus on the negatives, which does not mean that I think there are no positives—but I’m certainly not generally impressed, mostly because while I think true 3D graphic displays are a wonderful idea, I don’t think the cinema is the place for it. I mentioned problems like the fact that the faux-3D movie makes me expect parallax effects, which do not exist; and that I find the illusion of different focal depths in the image both tiring and irritating.

These problems could, in theory today and perhaps eventually in practice, be solved for interactive computer simulations—like games—by tracking head movements (to solve the parallax problem: recall this awesome hack) and eye movements, including pupil dilations and contractions, to adapt image “fuzziness” and create a truly convincing illusion of focal depth. However, this would not easily lend itself to movies, because it requires adapting the image to the individual viewer; besides, except for 3D-generated CG, I know of no technique or technology that could even capture the data.

In a letter to Roger Ebert, veteran editor Walter Murch has described another problem relating to focal depth that I hadn’t thought about:

The biggest problem with 3D, though, is the “convergence/focus” issue. A couple of the other issues – darkness and “smallness” – are at least theoretically solvable. But the deeper problem is that the audience must focus their eyes at the plane of the screen – say it is 80 feet away. This is constant no matter what.

But their eyes must converge at perhaps 10 feet away, then 60 feet, then 120 feet, and so on, depending on what the illusion is. So 3D films require us to focus at one distance and converge at another. And 600 million years of evolution has never presented this problem before. All living things with eyes have always focussed and converged at the same point.

True enough, and as the article goes on to say, this may well account for a good deal of the fatigue and headaches that 3D moviegoers experience. I wonder how, or even whether, this could feasibly be solved by a single-user adaptive system. I suppose if the display used a technology like (very, very low-power!) lasers whose angle of striking the eye could be varied depending on focal depth…

haggholm: (Default)

In a spectacular display of missing the point, a group of “security researchers” released a Firefox plugin called BlackSheep, designed to combat Firesheep by detecting if it’s in use on a network and, if so, warning the user.

To explain why this is at best pointless and at worst harmful, let’s recapitulate what Firesheep does: By listening to unencrypted traffic on a network (e.g. an unsecured wireless network), it steals authentication cookies and makes it trivial to hijack sessions on social networks like Facebook.
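The mechanics are worth making concrete: on an open network, the session cookie travels in cleartext inside every HTTP request, so “stealing” it amounts to reading a header. A minimal sketch in Python, where the request, hostname, and cookie value are all invented for illustration:

```python
# A plaintext HTTP request as any machine on an open wireless
# network could capture it. Hostname and cookie value are made up.
captured_request = (
    "GET /home HTTP/1.1\r\n"
    "Host: www.example-social-network.com\r\n"
    "Cookie: session_id=8f3a9c2d71; lang=en\r\n"
    "\r\n"
)

def extract_cookies(raw_request):
    """Pull the Cookie header out of a raw HTTP request, as a dict."""
    for line in raw_request.split("\r\n"):
        if line.lower().startswith("cookie:"):
            pairs = line.split(":", 1)[1].split(";")
            return dict(p.strip().split("=", 1) for p in pairs)
    return {}

# With this value, an eavesdropper can impersonate the victim's session.
print(extract_cookies(captured_request)["session_id"])
```

Nothing here is clever; that is the point. A site that sends session cookies over plain HTTP is handing them to anyone in radio range.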

Let’s use an analogy. Suppose some auto makers, Fnord and eHonda and so on, were to release a bunch of remote controls that could be used to unlock their cars and start the engines. Suppose, furthermore, that these remote controls were very poorly designed: Anyone who can listen to the remote control signals can copy them and use them to steal your car. This takes a bit of technical know-how, but it’s not exactly hard, and it means that anyone with a bit of know-how and a bit of malice can hang around the parking lot, wait for you to use your remote, and then steal your car while you’re in the shop.

Now suppose a bunch of guys come along and say Hey, that’s terrible, we need to show people how dangerous this situation is, and they start giving away a device for free that allows anyone to listen to remotes and steal cars. This device is to Fnord and eHonda remotes exactly what Firesheep is to Facebook, Twitter, and so forth. What’s important to realise is that the device is not the problem. It does allow the average schmoe, or incompetent prankster, to steal your car (or use your Facebook account), but the very important point is that the car thieves already knew how to do this. Firesheep didn’t create the problem; by making it trivial for anyone to exploit, it generated a lot of press for a flaw that already existed.

What the Firesheep guys wanted to accomplish was for Facebook and Twitter and so on to stand up and, in essence, say Whoops, we clearly need to make better remote controls for our cars. (It’s actually much easier for them than for our imaginary auto manufacturers, though.) True, Firesheep does expose users to pranksters who would not otherwise have known how to do this, but the flaw was already trivial to exploit by savvy attackers, which means that people who maliciously wanted to use your account to spam and so forth could already do so.

Now along come the BlackSheep guys and say, Hey, that’s terrible, the Firesheep guys are giving away a remote that lets people steal other people’s cars!, and create a detection device to stop this horrible abuse. But of course that doesn’t address the real point at all, because the real point has nothing to do with using Firesheep maliciously, but with illustrating how easy it is to attack a flawed system.

This is stupid for several reasons:

  1. If BlackSheep gets press, it might create an impression that the problem is solved. It isn’t, of course, firstly because Firesheep wasn’t the problem to begin with, and secondly because BlackSheep only runs in, and protects, Firefox.

  2. People running badly protected websites like Facebook could use BlackSheep as an excuse not to solve the real problem, by pretending that Firesheep was the problem and that problem has been solved.

  3. Even as a stop-gap measure, BlackSheep is a bad solution. The right solution is for Facebook and Twitter and so on to force secure connections. Meanwhile, as a stop-gap, the consumer can install plugins like the EFF’s HTTPS Everywhere, which force secure connections even to sites that don’t provide them automatically. This is a superior solution: BlackSheep tells you when someone’s eavesdropping on you; HTTPS Everywhere prevents people from eavesdropping in the first place.

    Let me restate this, because I think it’s important: BlackSheep is meaningful only to people with sufficient awareness of the problem to install software to combat it. To such people, it’s an inferior solution. The correct solution is not to ask consumers to do anything, but for service providers (Facebook, Twitter, …) to fix the problem on their end; but if a consumer does anything, BlackSheep shouldn’t be it.
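To make the contrast concrete, here is a toy sketch, in Python, of the HTTPS Everywhere approach: rewrite http:// URLs to https:// for sites known to support it, before any request is sent. The ruleset below is invented; the real extension ships far more detailed per-site rules:

```python
from urllib.parse import urlsplit, urlunsplit

# Sites known to support HTTPS. Illustrative only; real rulesets
# are much more detailed than a bare set of hostnames.
HTTPS_CAPABLE = {"www.facebook.com", "twitter.com"}

def upgrade(url):
    """Rewrite an http:// URL to https:// when the site supports it."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in HTTPS_CAPABLE:
        return urlunsplit(("https",) + tuple(parts[1:]))
    return url

print(upgrade("http://twitter.com/home"))  # https://twitter.com/home
```

A detector like BlackSheep, by contrast, fires only after your cookie has already crossed the air in cleartext.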

As I write this post, I hope that BlackSheep will get no serious press beyond a mention on Slashdot. It deserves to be forgotten and ignored. In case it isn’t ignored, though, there need to be mentions in the blogosphere of how misguided it seems.

Let’s all hope that Facebook, Twitter, and others get their act together; meanwhile install HTTPS Everywhere.

haggholm: (Default)

Pope Ratzinger feels that Nazism was an example of atheist extremism and that the Nazi tyranny wished to eradicate God from society.

Even in our own lifetimes we can recall how Britain and her leaders stood against a Nazi tyranny that wished to eradicate God from society and denied our common humanity to many, especially the Jews, who were thought unfit to live.

As we reflect on the sobering lessons of atheist extremism of the 20th century, let us never forget how the exclusion of God, religion and virtue from public life leads ultimately to a truncated vision of man and of society and thus a reductive vision of a person and his destiny.

Joseph Ratzinger (Pope)

I suppose he should know. Still, a certain Herr Adolf Hitler seems to have disagreed:

We were convinced that the people needs and requires this faith. We have therefore undertaken the fight against the atheistic movement, and that not merely with a few theoretical declarations: we have stamped it out.

I am now as before a Catholic and will always remain so.

Adolf Hitler (Führer)

The last quote is a bit dubious—Hitler was nominally Catholic, but the Nazi party embraced a lot of Pagan and mystic influences that his Catholic forebears would probably have liked to see burned at the stake. The Christian church bears responsibility for an awful lot of anti-Semitism (not just the Catholic church, of course: Martin Luther was perhaps even worse), but while Hitler enjoyed some of their doctrines, he was hardly a mainstream Christian—unless in the narrow sense of a mainstream Positive Christian, as the Nazi religious doctrine was called. And while the Catholic Church has been pretty widely criticised for not officially opposing the Nazi regime (though many individual Catholics and Catholic congregations did), it is at the very least not obvious that this silence stemmed from doctrinal approval rather than fear.

Still, it’s rather astonishingly ironic:

  • Hitler proclaimed himself a Catholic
  • Hitler boasted of having stamped out atheism
  • Hitler spoke widely of how he felt he followed in Jesus’s footsteps in fighting the Jews¹
  • The Catholic Church did not officially denounce Hitler or the Nazis (and has been widely criticised for it)
  • Pope Ratzinger was himself a member of the Nazi youth organisation, the Hitlerjugend, albeit membership was probably pretty much compulsory
  • Pope Ratzinger now claims that the Nazi tyranny was not merely atheistic, but in fact an example of atheist extremism

¹ I never claimed he made much sense.

haggholm: (Default)

There are few things so sure to annoy me as hype. Among those few things, of course, is factual inaccuracy. For both of these reasons, the new phenomenon¹ of 3D movies annoys me.

I will concede that in a narrow, technical sense, these movies are indeed 3D in that they do encode three spatial dimensions—that is, there is some information about depth encoded and presented. However, I don’t think it’s all that good, for various reasons, and would be more inclined to call it, say, about 2.4D.

Our eyes and brains use various cues for depth perception. The obvious ones that leap out at me, if you’ll excuse the pun, are

  1. Stereoscopic vision
  2. Focal depth
  3. Parallax

Let’s go over them with an eye (…) toward what movie makers, and other media producers, do, could do, and cannot do about it.

1. Stereoscopic vision

Odds are very good that you, gentle reader, have two eyes. Because these eyes are not in precisely the same location, they view things at slightly different angles. For objects that are far away, the difference in angle is very small. (Astronomers deal mostly with things at optical infinity, i.e. so far away that the lines of sight are effectively parallel.) For things that are very close, such as your nose, the difference in angle is very great. This is called stereoscopic vision and is heavily exploited by your brain, especially for short-distance depth perception, where your depth perception is both most important and most accurate: Consider that you can stick your hand out just far enough to catch a ball thrown to you, while you surely couldn’t estimate the distance to a ball fifty metres distant to within the few centimetres of precision you need.
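The geometry behind this is easy to sketch: for an object straight ahead, the angle between the two lines of sight follows directly from the eye separation and the distance. A quick Python illustration (the 65 mm interpupillary distance is a typical assumed figure):

```python
import math

EYE_SEPARATION = 0.065  # metres; a typical interpupillary distance

def vergence_angle_deg(distance_m):
    """Angle between the two lines of sight for an object dead ahead."""
    return math.degrees(2 * math.atan((EYE_SEPARATION / 2) / distance_m))

for d in (0.25, 2.0, 50.0):
    print(f"{d:>6} m: {vergence_angle_deg(d):7.3f} degrees")
```

At arm’s length the angle is nearly fifteen degrees; at fifty metres it has dwindled to under a tenth of a degree, which is why stereoscopy helps you catch a ball but not judge a distant throw.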

“3D” movies, of course, exploit this technique. In fact, I think of these movies not as three-dimensional, but rather as stereoscopic movies. There are two or three ways of making them, and three ways I’m aware of to present them.

To create stereoscopic footage, you can…

  1. …Render computer-generated footage from two angles. If you’re making a computer-generated movie, this would be pretty straightforward.

  2. …Shoot the movie with a special stereoscopic camera with two lenses mimicking the viewer’s eyes, accurately capturing everything from two angles just as the eyes would. These cameras do exist and it is done, but apparently it’s tricky (and the cameras are very expensive). Consider that it’s not as simple as just sticking two cameras together: Their focal depths have to be closely co-ordinated, and for all I know the angle might need subtle adjustment at close focal depths, as I believe happens with your eyes’ convergence.

  3. …Shoot the movie in the usual fashion and add depth information in post-processing. This is a terrible idea and is, of course, widely used. What this means is that after all the footage is ready, the editors sit down and decide how far away all the objects on screen are. There’s no way in hell they can get everything right, and of course doing their very very best would take ridiculous amounts of time, so basically they divide a scene into different planes of, say, “objects close up”, “objects 5 metres off”, “objects 10 metres off”, and “background objects”. This is extremely artificial.

All right, so you have your movie with stereoscopic information captured. Now you need to display it to your viewers. There are several ways to do this with various levels of quality and cost effectiveness, as well as different limitations on the number of viewers.

  1. Glasses with different screens for the two eyes. For all I know this may be the oldest method: simply have the viewer or player put on a pair of glasses where each “lens” is really a small LCD monitor, each displaying the proper image for the proper eye. Technically this is pretty good, as the image quality will be as good as you can make a tiny, tiny monitor, but everyone has to wear a pair of bulky and very expensive glasses. I’ve seen these for 3D gaming, but obviously they won’t work in movie theatres.

  2. Shutter glasses. Instead of having two screens showing different pictures, have one screen showing different pictures…alternating very quickly. The typical computer monitor has a refresh rate of 60 Hz, meaning that the image changes 60 times every second. Shutter glasses are generally made to work with 120 Hz monitors. The monitor will show a frame of angle A, then a frame of angle B, then A, and so on, so that each angle gets 60 frames per second. The way this works to give you stereoscopic vision is that you wear a pair of special glasses, shutter glasses, which are synchronised with the monitor and successively block out every alternate frame, so that your left eye only sees the A angle and your right eye only sees the B angle. Because the change is so rapid, you do not perceive any flicker. (Consider that movies look smooth, and they only run at 24 frames per second.)

    There’s even a neat trick now in use to support multiplayer games on a single screen. This rapid flicking back and forth could also be used to show completely different scenes, so that two people looking at the same screen would see different images—an alternative to the split-screen games of yore. Of course, if you want this stereoscopic, you need a 240 Hz TV (I don’t know if they exist). And that’s for two players: 60 Hz times number of players, times two if you want stereoscopic vision…

    In any case, this is another neat trick but again requires expensive glasses and display media capable of very rapid changes. OK for computer games if you can persuade gamers to buy 120 Hz displays, not so good for the movie theatre.

  3. The final trick is similar to the previous one: Show two images with one screen. Here, however, we do it at the same time. We still need a way to get different images to different eyes, so we need to block out angle A from the right eye, &c. Here we have the familiar red/green “3D” glasses, where all the depth information is conveyed in colours that are filtered out, differently for each eye. Modern stereoscopic displays do something similar but, rather than using colour-based filters, display the left and right images with different polarisation and use polarised glasses for filtering. This reduces light intensity but does not entirely filter out a specific part of the spectrum from each eye.†
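The refresh-rate arithmetic in the shutter-glasses scheme (one time-slice per eye, per viewer) is worth spelling out:

```python
BASE_HZ = 60  # flicker-free frame rate each eye needs, per viewer

def required_refresh_hz(viewers, stereoscopic):
    """Display refresh rate needed to time-slice frames among viewers."""
    # One slice per viewer, doubled when each viewer needs two eye views.
    return BASE_HZ * viewers * (2 if stereoscopic else 1)

print(required_refresh_hz(1, True))   # 120: shutter glasses for one viewer
print(required_refresh_hz(2, True))   # 240: two players, both stereoscopic
```

This is only the naive accounting sketched in the text, but it shows why per-viewer refresh demands escalate so quickly.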

To summarise, there are at least three ways to capture stereoscopic footage and at least three ways to display it. Hollywood alternates between a good and a very bad way of capturing it, and uses the worst (but cheapest) method to display it in theatres.

2. Focal depth

All right, lots of talk but all we’ve discussed is stereoscopic distance. There are other tricks your brain uses to infer distance. One of them is the fact that your eyes can only focus on one distance at a time. If you focus on something a certain distance away, everything at different distances will look blurry. The greater the difference, the blurrier.

In a sense, of course, this is built into the medium. Every movie ever shot with a camera encodes this information, as does every picture shot with a real camera—because cameras have focal depth limitations, too.

The one medium missing this entirely is computer games. In a video game of any sort, the computer cannot render out-of-focus things as blurry because, well, the computer doesn’t know what you are currently focussing on. It would be very annoying to play a first-person shooter and be unable to make out the enemy in front of you because the computer assumes you’re looking at a distant object, or vice versa. Thus, everything is rendered sharply. This is a necessary evil, because it makes 3D computer graphics very artificial: Everything looks sharp in a way it would not in real life. (The exception is in games with overhead views, like most strategy games: Since everything you see is about equally distant from the camera, it should be equally sharp.)
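A toy model makes the dilemma plain. Suppose blur grows with the difference in dioptres (inverse distance) between where the eye is focused and where an object sits; the scale constant below is arbitrary, chosen only for illustration:

```python
BLUR_SCALE = 2.0  # arbitrary constant for this toy model

def blur_amount(focus_distance_m, object_distance_m):
    """Out-of-focus blur as a dioptric difference; zero means sharp."""
    return BLUR_SCALE * abs(1 / focus_distance_m - 1 / object_distance_m)

# Eye focused on an enemy 3 m away: a wall at 30 m should look blurry...
print(blur_amount(3.0, 30.0))
# ...but without gaze tracking the engine cannot know which object you
# are focused on, so it must render both perfectly sharp.
print(blur_amount(3.0, 3.0))  # 0.0: the focused object itself
```

The missing input is the focus distance: the geometry is trivial once you have it, and only an eye tracker could supply it.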

Personally, however, I have found this effect to be a nuisance in the new “3D” movies. When the stereoscopic dimension is added, I watch the film less as a flat picture and more as though it truly contained 3D information. However, when (say) watching Avatar, looking at a background object—even though stereoscopic vision informs me that it truly is farther away—does not bring it into focus, because my eyes still receive light from the fixed plane of the screen.

This may be something one simply has to get used to. After all, the same thing is in effect in regular movies, in still photography, and so on.

Still, if I were to dream, I should want a system capable of taking this effect into account. There already exist computers that perform eye-tracking to control cursors and similar. I do not know whether they are fast enough to track eye motion so precisely that out-of-focus blurring would become helpful and authentic rather than a nuisance, but if they aren’t, they surely will be eventually. Build such sensors into shutter glasses and you’re onto something.

Of course, this would be absolutely impossible to implement for any but computer generated media. A movie camera has a focal distance setting just like your eye, stereoscopic or not. Furthermore, even if you made a 3D movie with computer graphics, in order to show it with adaptive focus, it would have to simultaneously track and adapt to every viewer’s eye movements—like a computer game you can’t control, rather than a single visual stream that everyone perceives.

3. Parallax

Parallax refers to the visual effect of nearby objects seeming to move faster than far-away ones. Think of sitting in a car, watching the light poles zoom by impossibly fast, while the trees at the side of the road move slowly, the mountains only over the course of hours, and the moon and stars seem to be entirely fixed. Parallax: Because nearby objects are close to you, your angle to them in relation to the background changes more rapidly.
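A rough Python illustration of the car example: for an object directly abeam, the apparent angular speed is approximately the car’s speed divided by the object’s distance (the speed and distances here are invented for the example):

```python
import math

SPEED = 30.0  # m/s; roughly highway speed

def angular_speed_deg_per_s(distance_m):
    """Apparent angular speed of an object directly abeam of the car."""
    return math.degrees(SPEED / distance_m)

for name, d in (("light pole", 5.0), ("treeline", 100.0), ("mountain", 5000.0)):
    print(f"{name:>10}: {angular_speed_deg_per_s(d):8.2f} deg/s")
```

The pole sweeps through hundreds of degrees per second while the mountain creeps along at a third of a degree; that disparity is exactly the cue your visual system reads as depth.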

Of course, in a trivial sense, every animated medium already does capture this; again, it’s not something we need stereoscopic vision for. However, at close distances, a significant source of parallax is your head movement. A movie can provide a 3D illusion without taking this into account…so long as you sit perfectly still, never moving your head while a close-up is on the screen.

As with focal depths, of course, this is viewer-dependent and completely impossible to implement in a movie theatre. However, it should be eminently feasible on home computers and game systems; indeed, someone has implemented headtracking with a Wii remote—a far more impressive emulation of true three-dimensionality than any amount of stereoscopic vision, if you ask me.

Combined with eye tracking to monitor focal depth, this would be amazing. Add stereoscopic images and you’d have a perfect trifecta—I honestly think that would be the least important part, but also the easiest (the technology is already commercially available and widespread), so it would be sort of silly not to add it.

Thoughts

After watching a “3D” movie or two, I have come away annoyed because I felt that the stereoscopic effect detracted rather than added. Some of this is doubtless because, being who I am, the hyped-up claim that it truly shows three dimensions properly² annoys me. Some of it, however, is a sort of uncanny valley effect. Since stereoscopic vision tantalises my brain into attempting to regard these movies as three-dimensional, it’s a big turn-off to find that there are several depth-perception effects that they don’t mimic at all. If a movie is not stereoscopic, my brain does not seem to go looking for those cues because there’s no hint at all that they will be present.

Of course, it may just be that I need to get used to it. After all, “2D” movies³ already contain depth cues ([limited] parallax, [fixed] focal depth differences) without triggering any tendency to go looking for more. I haven’t watched a lot of stereoscopic imagery, and perhaps my brain will eventually learn to treat it as images-with-another-feature. For now, however, adding stereoscopic information to productions that can’t actually provide the full 3D visual experience seems to me rather like serving up cupcakes with plastic icing: It may technically be closer to a real cupcake than no icing at all, but I prefer a real muffin to a fake cupcake.


¹ It’s at least new in that only now are they widely shot and distributed.

² Technically all movies do depict three dimensions properly, but these new ones are really looking to add the fourth dimension of depth to the already-working height, width, and time.

³ Which are really 3D; see above.

† This should not have needed to be pointed out to me, as I have worn the damned polarised things, but originally I completely forgot them and wrote this as though we still relied on red/green glasses. Thanks to [livejournal.com profile] chutzman for the correction.

haggholm: (red)

Caveat lector: This is a rant with a lot of intentional hyperbole.


During my attempts at figuring out what phone to get next, I spent some time on manufacturers’ websites. This is always a frustrating experience, because phone manufacturers tend not to publish very detailed information (at least not on any pages I came across), and because my questions tend to be a bit arcane (Will this phone allow me to subscribe to an LDAP directory as an address book?).

None, however, managed to enrage me as much as Apple’s website, which appears to be made of fluff and chromed trimmings. The technical content amounted roughly to We make a phone, with pages of filler largely consisting of We are awesome and we make awesome stuff. I don’t want fluff. I don’t want marketing-speak.

Marketing speak doesn’t work on me, because holy shit, I’m not that stupid. Surely I can’t be very unusual in picking up on this? If somebody tries to sell me something based on their assertions that it’s awesome and cool people use it, I will tell them to fuck off. If you want to sell it to me, tell me about the features it has and hand me a spec sheet. I don’t mean eighteen different pages that bury various technical details in fluff, and one annoyingly laid-out page with some tech specs; I mean a single, clean page where the features are enumerated and I can actually get a solid sense of what the damned thing does. The lack of this sort of thing—which is standard issue in the PC world where I am used to buying hardware—seems to express contempt for my demographic, i.e. people who want convenient access to information on what, exactly, it is that they are buying, before they buy it.

I feel like they are being condescending in that, insofar as the advertising is directed at me, they are saying either We believe that you are stupid enough to buy our product based on the shit we’re telling you, or We believe that you’re too stupid to grasp any of the real information, so we’ll give you the information equivalent of chrome-plated turds instead. (It’s that or We don’t have a good product, and they don’t seem to believe it.) Of course, the reality is that their marketing isn’t aimed at people like me, but that message isn’t terribly helpful either: We don’t give a shit about you or your kind, and if we come off as condescending or offensive, who cares? You’re just a nerd, nobody gives a damn.

Admittedly, it’s possible that they just don’t support anything I care about—good IMAP support, Google Calendar sync, LDAP, etc., and so just don’t have any information to share.

It also annoys me that the website contains misinformation. For instance, they claim that the iPhone has a standby time of up to 300 hours, which is literally true, but only in the sense that up to doesn’t actually specify a lower limit. People I know who use the things seem to opine that they need to be recharged on a nightly basis. Two days, 48 hours, is too long, so less than 16% of the advertised standby time seems truly realistic even for users who do very little actual calling. Of course you can’t trust manufacturer stats, but at least in the world of laptops I can usually trust them to get their numbers within the right order of magnitude.


The phone itself has more problems—the ludicrous lack of copy/paste, the fact that you can’t even run software updates without running iTunes (you not only have to have a PC, you also have to run an OS Apple bothers porting iTunes to)—but that’s not the point of this rant…


The truly aggravating factor is that, through no fault of Apple’s marketing department (which I consider roughly equivalent to that of the Sirius Cybernetics Corporation), I can’t discount the product. Their marketers and web people may all be assholes, but too many people whose judgements I trust and whose opinions I care about claim that they make good devices, that the iPhone itself is actually a good product. And it may well be…and so I can’t just dismiss it…and so I have to seek out information about it, regardless of what the search may do to my blood pressure. One thing is for sure, though: If I ever buy an Apple product, it will be in spite of their marketing, and very grudgingly. I might also have to scrape off the logo to live with the shame.

haggholm: (Default)

Postgres:

somedb=> select date('2009-05-27') + 7;
  ?column?  
------------
 2009-06-03
(1 row)

MySQL:

mysql> select date('2009-05-27') + 7;
+------------------------+
| date('2009-05-27') + 7 |
+------------------------+
|               20090534 | 
+------------------------+
1 row in set (0.00 sec)

My current task, which involves date calculations on items in the database, is going to be a bit complicated by the fact that MySQL’s date arithmetic sophistication is such that it thinks that one week from today is May 34.

Update: I can, of course, and probably will use MySQL’s builtin functions (DATE_ADD() et al.), but this forces me to use non-standard functions rather than standard SQL operations. (I will get away with this because, and only because, this module is restricted to MySQL only, unlike our core system.) Furthermore, if they have implemented the proper arithmetic in functions, I fail to see why they left the operators with a completely idiotic default.
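For the record, the builtin route looks like this (a sketch against the same date as the transcripts above; the INTERVAL syntax is MySQL’s, not standard SQL):

```sql
-- MySQL gets the calendar right once you spell out the interval:
SELECT DATE_ADD('2009-05-27', INTERVAL 7 DAY);   -- 2009-06-03

-- The same INTERVAL keyword also works with the + operator,
-- so only the bare "+ 7" form falls into the May 34 trap:
SELECT DATE('2009-05-27') + INTERVAL 7 DAY;      -- 2009-06-03
```

Which makes the default behaviour of bare integer addition all the more baffling.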

Star Trek

May. 9th, 2009 09:22 pm
haggholm: (Default)

I just went to see the new Star Trek movie. I am no Trekkie, but it was pretty good and I enjoyed it. I don’t have very much to say about it…except the one crime against physics that made it difficult for me to take certain critical parts seriously:

Cut for spoilers )
haggholm: (Default)
// 1.
$map[$value] = ($value == $req['value']) ? 1.0 : 0.0;

// 2.
$map[$value] = ($value == $req['value']) ? 1.0 : 0;

Can anyone think of any reason whatsoever why these two statements should behave differently? If you had told me they would, I would have laughed derisively. And yet, PHP 5.2.6† at least thinks that they are not merely different, but critically so: While (2) works, (1) results in a syntax error:

Parse error: syntax error, unexpected T_DNUMBER in [...].php on line 232

Note that

  1. the literal 0.0 is not illegal in general, and
  2. the statement fails with other floating-point literals, too—it may be irrelevant to write 0.0 rather than 0, but I also couldn’t write 0.5 if that were what I needed.

What the hell is this lunacy‽

Update: This must be a bug, not (another) idiotic design feature: It raises a parse error when I run it through Apache/mod_php‡, but not with the CLI version of the PHP interpreter. On the other hand, why on Earth should the two use different parsers…? The mystery only deepens.
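While the bug persists, one safe dodge (a sketch, not the actual code from the file above, though it reuses the same variables) is to keep the ternary free of float literals entirely:

```php
<?php
// Sidesteps the parse error: no float literal appears in the ternary
// at all, so no T_DNUMBER token ever reaches the broken code path;
// the cast then produces the same 1.0/0.0 values as statement (1).
$map[$value] = (float) (($value == $req['value']) ? 1 : 0);
```

Ugly, but it cannot trip over a token it never emits.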

petter@petter-office:~/temp$ php --version
PHP 5.2.6-2ubuntu4 with Suhosin-Patch 0.9.6.2 (cli) (built: Oct 14 2008 20:06:32)
Copyright (c) 1997-2008 The PHP Group
Zend Engine v2.2.0, Copyright (c) 1998-2008 Zend Technologies
    with Xdebug v2.0.3, Copyright (c) 2002-2007, by Derick Rethans

‡ I often wonder if it isn’t really mod_intercal. PHP is but a PLEASE and a COME FROM away from being INTERCAL 2.0 (for the Web).

haggholm: (someone is wrong on the internet)

It’s a sad and extremely frustrating thing when someone mistakenly thinks that they understand logic. Never mind the context or subject matter; suffice it to say that I was addressing the logical form of an argument (which was invalid—the argument was begging the question) whereas this individual thought that I was addressing the issue as a whole, in spite of my repeatedly telling him that I was talking about the strict logic.

The problem turned out to be that he had no idea what strict logic really is. The following is a lightly trimmed and reformatted (but not otherwise edited) extract of part of the discussion.


I really find it difficult to follow what you consider to be valid logic.

http://www.iep.utm.edu/v/val-snd.htm will give you a primer. Read and digest.

I read your primer for about two seconds and I found this.

"A deductive argument is sound if and only if it is both valid, and all of its premises are actually true. Otherwise, a deductive argument is unsound."

I disagree. I'd say that the soundness of an argument is a measure of its deductive validity.

Then you have no idea what you are talking about. "Sound" is a technical term in formal logic, not subject to debate or interpretation. You may as well say that you think that the equivalence of two additive expressions is a measure of its approximate satisfaction of your requirements -- it's nonsense; "soundness" and "equivalence" are formal terms (in logic and mathematics, respectively) with very precise definitions.

If the premises are true and the deduction is valid then that makes the deduction also true, doesn't it. A sound argument only needs to have relevant premises and valid reasoning, in my opinion.

You don't get to inject your own opinion of the meaning of "sound", "valid", "plus", "minus", or "equals". If we are speaking of logic, you may safely assume that we are using the terminology of logic.

An argument therefore would be sound if it addressed a problem and came up with a reasonable and relevant solution. Since we don't always know whether premises are actually true when we use them in deductive logic, we need a general adjective which indicates that the logical processes of an argument have been correctly folowed, and I'd say that adjective could be "sound". So generally, "soundness" refers to the reasoning processes and not to the truth of the premises, which could always be in doubt.

Also there is a comma, placed incorrectly before the word "and". What's more, unless the word "actually" is supposed to indicate an element of surprise, it's redundant.

So I don't think I'll read your logic primer further, thankyou. I don't think I need it.


Clearly not…

haggholm: (someone is wrong on the internet)

When I clicked a link and was transported to a column on Proposition 8 by Orson Scott Card, I was fully prepared to be offended. I didn’t expect to be so amused. For a writer so erudite, his arguments are remarkably hollow. I honestly expected him to make a good case for a bad cause—not so. How can the author of Ender’s Game produce such vacuous drivel?

The premise of this editorial is that he promised to write a set of secular arguments to ban gay marriage, since even he realises that non-Mormons won’t be convinced by We Mormons think God doesn’t like it. I will not quote it in full—you have the link above—but will cite the parts that made me want to burst out with derisive retorts.

Quoted and indented parts are from Mr. Card’s column; the commentary is mine.


The first and greatest threat from court decisions in California and Massachusetts, giving legal recognition to "gay marriage," is that it marks the end of democracy in America.

These judges are making new law without any democratic process; in fact, their decisions are striking down laws enacted by majority vote.

Democratic process is great. I really think that constitutional, representative democracy is the greatest…well, the least terrible political system anyone has ever invented. However, a constitutional democracy does not base all of its laws and decisions on popular vote—if it did, it wouldn’t need a constitution (except to codify voting procedures and ensure that they are binding). Any civilised democracy has laws in place to protect the rights of minorities lest they be oppressed by majorities sufficiently large to win elections. Unpopular minorities need this protection sometimes—like gays. Or Mormons.

We already know where these decisions lead. We have seen it with the court decisions legalizing abortion. At first, it was only early abortions; within a few years, though, any abortion up to the killing of a viable baby in mid-birth was made legal.

Really?

I presume that he’s talking about partial-birth abortions; it certainly sounds like it. Now, partial-birth abortion is a stupid term to begin with; it’s not much of a medical term, but a political one popularised by the opponents of late-term abortions. During a partial-birth abortion, the fetus is removed via the cervix, the same path as a birth, but that doesn’t mean that natural birth has begun. It’s one of the procedures used for abortions after the 21st week; it’s used in 15% of those cases. This means that it’s a procedure used for fetuses some of which don’t have any brain activity, let alone higher brain functions, consciousness, or anything psychologically human. Additionally, the procedure is used to remove fetuses that die of natural causes—it’s a method for removing a fetus, not killing it (although for abortive purposes, the fetus is of course killed before it is removed).

But Mr. Card’s rhetoric appears to take an already deliberately provocative term for this procedure and apply it by the most horrifying interpretation of the term itself, ignoring any facts about the procedure, to make it sound as though American courts allow doctors to murder infants in the delivery room (and of course legalising gay marriage leads down the same path). This isn’t merely dishonest; it’s ludicrous.

How dangerous is this, politically? Please remember that for the mildest of comments critical of the political agenda of homosexual activists, I have been called a "homophobe" for years.

This is a term that was invented to describe people with a pathological fear of homosexuals -- the kind of people who engage in acts of violence against gays. But the term was immediately extended to apply to anyone who opposed the homosexual activist agenda in any way.

A term that has mental-health implications (homophobe) is now routinely applied to anyone who deviates from the politically correct line. How long before opposing gay marriage, or refusing to recognize it, gets you officially classified as "mentally ill"?

I agree that it’s not a very accurate term. I commiserate. I feel the same way about using the term organic to mean not grown using chemically synthesised pesticides. However, given the prevalence of the terms in question, I think homophobe is roughly as subject to misunderstanding in context as the word organic applied in relation to produce.

Here's the irony: There is no branch of government with the authority to redefine marriage. Marriage is older than government. Its meaning is universal: It is the permanent or semipermanent bond between a man and a woman, establishing responsibilities between the couple and any children that ensue.

There we have it: The meaning of marriage is universal. For starters, it’s either permanent or not permanent. It’s between one man and one woman, or occasionally one man and multiple women. It doesn’t occur across racial barriers, or maybe it does; and the groom’s family always pays a price to buy the bride, or maybe the bride brings a dowry, or maybe neither happens.

Either way, you see what I mean: It’s universal.

The laws concerning marriage did not create marriage, they merely attempted to solve problems in such areas as inheritance, property, paternity, divorce, adoption and so on.

[...]

No matter how sexually attracted a man might be toward other men, or a woman toward other women, and no matter how close the bonds of affection and friendship might be within same-sex couples, there is no act of court or Congress that can make these relationships the same as the coupling between a man and a woman.

the same ≠ morally equivalent ≠ legally equivalent

Women are not the same as men; does that mean they should not have equal rights? Black people aren’t the same as white people; does that mean they should not have equal rights?

There is no natural method by which two males or two females can create offspring in which both partners contribute genetically. This is not subject to legislation, let alone fashionable opinion.

Thanks for telling us this. I’m sure the world was just about dying from uncertainty on this matter.

That many individuals suffer from sex-role dysfunctions does not change the fact that only heterosexual mating can result in families where a father and a mother collaborate in rearing children that share a genetic contribution from both parents.

Ah, there we have it! Only heterosexual couples (out of all couples) can produce and rear children (factually true, I agree); therefore, because X, only heterosexual couples should be allowed to marry.

The only X that makes this argument hold is X = only couples likely (or at least able) to produce and rear children should be allowed to marry. Clearly, this must be Mr. Card’s position.

When a heterosexual couple cannot have children, their faithful marriage still affirms, in the eyes of other people's children, the universality of the pattern of marriage.

…What, are we making exceptions already? Even though marriage is for procreation only is the only logical piece that fits, we are to make an exception because it’s just so important that children realise that, universally, marriages are usually between a man and one or more women, are or are not permanent, and do or do not entail the bride’s family, or possibly the groom’s family, paying a price?

Note also a rather sneaky introduction of the naturalistic fallacy here. Earlier, he was saying that this is what marriage is and has always been. Now, he proceeds as though he had established that this is what marriage should be. This does not follow from that.

We need the same public protection of marriage that we have of property. If we did not all agree that people continue to own things that are not in their immediate possession, then you could not reasonably expect to come home and find your house unoccupied.

We agree, by law, to make it a crime to take what belongs to others -- even when you need it more than they do. Every aspect of our lives is affected by this, and not for a moment could a society exist that did not protect the right of property.

Marriage is, if anything, more vital, more central, than property.

Husbands need to have the whole society agree that when they marry, their wives are off limits to all other males. He has a right to trust that all his wife's children would be his.

Wives need to have the whole society agree that when they marry, their husband is off limits to all other females. All of his protection and earning power will be devoted to her and her children, and will not be divided with other women and their children.

Apart from the rather obvious conclusions that men own women, and women are exclusive property and protégées of men (in spite of some of the universally identical marriages being polygynous, including within Mr. Card’s own deranged religion), I suppose the point is that (as Jesus taught us, maybe?) we shouldn’t really care about other people’s children, and (literal) bastards deserve no consideration whatsoever, because all that is important for a child to matter to a parent is genetic proximity.

Wait, wasn’t he saying something about adopted children? (Okay, you may not know this—I didn’t bother to cite it. You can check the article, or take my word when I say that Mr. Card is A-OK with adoption.) How does this tie into a necessity that the children he raises be biologically his?

These two premises are so basic that they preexist any known government. In most societies through history, failure to live up to these commitments has led to extreme social sanctions -- even, in many cases, death.

And if people two thousand years ago killed people for it, it must be morally wrong.

Only when the marriage of heterosexuals has the support of the whole society can we have our best hope of raising each new generation to aspire to continue our civilization -- including the custom of marriage.

Rephrased in semi-symbolic logic:

Only if X is supported by society will society continue to value X.

Similarly, an old-time southerner might say, Only when the segregation of blacks and whites has the support of the whole society can we have our best hope of raising each new generation to aspire to continue our civilization -- including the custom of racial segregation.

Because when government is the enemy of marriage, then the people who are actually creating successful marriages have no choice but to change governments, by whatever means is made possible or necessary.

Am I to understand that Mr. Card feels that civil war is a price he’s willing to pay to stop gay people who love each other from achieving legal rights commensurate with those of straight people who love each other?

Society gains no benefit whatsoever (except for a momentary warm feeling about how "fair" and "compassionate" we are) from renaming homosexual liaisons and friendships as marriage.

Aren’t justice and compassion held to be rather high virtues in most moral systems, including both secular humanism and most Christian sects? I suppose Mormons are different. Personally, I disagree; I think that justice and compassion are worth making sacrifices for, so that even if gay marriage were detrimental for society as a whole—which I do not believe for a moment—I expect I should still continue to support it, because some goods come only at a price.

Benjamin Franklin said it well: They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety. I rank justice right up there with liberty, and consider compassion to be a pretty high virtue, as well.

Married people attempting to raise children with the hope that they, in turn, will be reproductively successful, have every reason to oppose the normalization of homosexual unions.

…Because gay marriage causes infertility and/or low sperm count in straight people?

It's about grandchildren. That's what all life is about. It's not enough just to spawn -- your offspring must grow up in circumstances that will maximize their reproductive opportunities.

Personally, I think that life is about a lot of things other than reproducing. Indeed, I think that anyone who disagrees with me is an immoral and frankly terrible person. I also note that a straightforward implication of what Mr. Card here suggests is that infertile and sterile people have no purpose in life.

In a strict biological sense—if we speak of meaning in life strictly in terms of biological imperative—then he’s closer to being right (still wrong, of course, but not nearly as wrong). However, I thought we were talking about morality, which usually includes concepts like rising above our animal nature (to borrow a tedious cliché).

How long before married people answer the dictators thus: Regardless of law, marriage has only one definition, and any government that attempts to change it is my mortal enemy. I will act to destroy that government and bring it down, so it can be replaced with a government that will respect and support marriage, and help me raise my children in a society where they will expect to marry in their turn.

Yes, I guess he is advocating revolution and civil war to stop the gays. He seems rather less sane than I thought, and I wasn’t being generous to begin with.

Biological imperatives trump laws.

Then jealousy and rage trump laws against murder. Greed and desire trump laws against rape and theft. This isn’t a recipe for morality, but for anarchy, a Hobbesian State of Nature where life is nasty, brutish, and short.

I knew that Orson Scott Card was a man I profoundly disapprove of and disagree with, due to opinions he holds on religious grounds. However, if this is the best effort he can make to frame our arguments in completely secular terms; if this is an example of what he considers his compelling secular arguments in favour of giving permanent heterosexual pairings a monopoly on legally recognized status in all societies, then he’s not just blinded by religion—as many otherwise intelligent people are, compartmentalising their beliefs—he is also quite incapable of logic, stupid, and possibly insane.

haggholm: (Default)

Perhaps influenced by Richard Dawkins, I find the concept of punctuated equilibrium in evolutionary biology to be irritatingly overhyped. Without going into extremely technical details, I will cite Wikipedia—very deliberately: Regardless of technicalities, this is the popular perception of the idea. Emphasised text highlights the P.E.-ists’ understanding of classic Darwinian theory.

Punctuated equilibrium is a theory of evolutionary biology which states that most sexually reproducing populations experience little change for most of their geological history, and that when phenotypic evolution does occur, it is localized in rare, rapid events of branching speciation (called cladogenesis).

Punctuated equilibrium is commonly contrasted against the theory of phyletic gradualism, which states that evolution generally occurs uniformly and by the steady and gradual transformation of whole lineages (anagenesis). In this view, evolution is seen as generally smooth and continuous.

In 1972 paleontologists Niles Eldredge and Stephen Jay Gould published a landmark paper developing this idea. Their paper was built upon Ernst Mayr's theory of geographic speciation, I. Michael Lerner's theories of developmental and genetic homeostasis, as well as their own empirical research. Eldredge and Gould proposed that the degree of gradualism championed by Charles Darwin was virtually nonexistent in the fossil record, and that stasis dominates the history of most fossil species.

Compare this to the following, where emphasised text marks Darwin’s understanding of his theory.

…I must here remark that I do not suppose that the process ever goes on so regularly as it is represented in the diagram, though in itself made somewhat irregular, nor that it goes on continuously; it is far more probable that each form remains for long periods unaltered, and then again undergoes modification. Nor do I suppose that the most divergent varieties are invariably preserved: a medium form may often long endure, and may or may not produce more than one modified descendant…

Charles Darwin, On the Origin of Species

I don’t know which edition of the Origin my copy reproduces. The first edition was published in 1859; the sixth (and final) edition that Darwin produced was published in 1872. By the most generous estimate, then, it seems to me that Darwin came up with a version of the idea a century before Gould and Eldredge—which makes the notion of punctuated equilibrium being a novel, revolutionary, or landmark idea seem fairly ridiculous. It may be—in fact, almost certainly is—correct, but doesn’t deserve the press it has received, and it’s Darwin, not Gould, who should get the credit.

haggholm: (Default)

I’m a big fan of Gnome, but I know that KDE has plenty of adherents, and while I didn’t like KDE 3.x, I figured that now that it has reached version 4.1, it was time to give it another shot (it being widely agreed that KDE 4.0 just Was Not There Yet). So I read about Gentoo’s new sets feature, worked around a few bugs, and installed KDE 4.1. While I’ll shortly give you a brief list of grievances, the very very short version is that I’m currently uninstalling it and hoping that I can somehow undo the damage.

  • KDE 4.1 is very pretty. I’ll give it that.
  • It doesn’t understand multiple displays very well—I use NVidia’s TwinView system, which Gnome handles beautifully. KDE 4.1 sees it as one huge display…sort of.
  • Panels (like the task bar with a start-type button, and so on) don’t span both monitors. However, because KDE 4.1 doesn’t see the monitors as quite separate, either, there’s no way to configure a panel to be on a specific screen. I like my panels on my main screen. In Gnome, this is easy. In KDE 4.1, I spent hours failing to make it work.
  • The KDE 4.1 version of KMail, the mail agent, has one fatal flaw in its IMAP handling: When I open it, leave it open, and mark messages as read on another computer, there’s no obvious way to make it recognise those messages as read. Reloading the folder doesn’t do it. Restarting KMail doesn’t do it. Thunderbird and Gnome’s Evolution have no problem with this—in IMAP, read status is a server status, not client-side!
  • Konqueror still makes me want to use Firefox. So I do.
  • I am not moving away from Pidgin.
  • My main applications being GTK-based, and the menu interface of KDE 4.1 unfortunately annoying the hell out of me, I decided to return to good old familiar Gnome.
  • Installing KDE 4.1 broke my Gnome themes! Back in Gnome, things don’t look right. Why KDE saw fit to mess with Gnome theme settings, I don’t know.
  • Installing KDE 4.1 broke my menus! My menus are full of KDE items, with many important Gnome items gone. Notably, the settings items for things like theming and appearance are nowhere to be found, which is very irritating given the above item.

Having uninstalled all the kde package sets and removed all packages whose names start with a k from my package list, it’s now time to try to get my beautiful Gnome system back in order. It took no effort at all to get it to work in the first place, so one small comfort is that at the cost of losing theme settings, custom menu setups, and some application settings, I can at least blitz the settings and get a sane default.

With Gnome, that is.

Brief rant

Aug. 27th, 2008 04:23 pm
haggholm: (Default)

Dealing with a large volume of bugs in one day is kind of stressful—we got sixteen show-stopper bug reports today, half of which were duplicates, and all of which were initially blamed on my refactorings. Some of them were due to me (so I'm annoyed because I broke a few things), though I've fixed all those now; some of them were due to a bug no less than four years old (so I'm annoyed because I was initially blamed for things that were done years before I even started working here).

But what currently annoys me is that all of them were communicated to me poorly—in two massive emails with tons of irrelevant conversations cited rather than point-by-point, and all top-posted (reply at the top, quoted message responded to at the bottom).

I realise that top-posting is used by a huge number of people—probably the majority of non-geeks. It is nonetheless a terrible practice, because one of the following must be true:

  • The quoted message is not needed for context, and its inclusion is redundant, making the email unnecessarily large. Since I don't know if there's anything important below, I have to scroll down to check just in case.
  • The quoted message is needed for context. I have to scroll down to the middle of the email, read the quoted message, then scroll back up to read the reply. If it is a long email, I may have to scroll back and forth to keep track.

Here's what top-posting looks like. Note that when we start reading, we have no idea what is supposed to be a good idea unless we send so little email that the referent for the pronoun “that” is obvious.

Yes, I think that's a good idea. It makes it easier for me
to write, and I'm lazy.

-----Original Message-----
From: Petter Häggholm [mailto:petter@fake.com] 
Sent: August 27, 2008 7:12 PM
To: Top Poster
Subject: Something

Do you really think that top-posting is a good idea?
Why or why not?

What inline posting looks like:

Petter Häggholm wrote:
>
> Do you really think that top-posting is a good idea?

Of course not.

> Why or why not?

Not only does it force scrolling, it also makes it hard to
respond in a concise, point-by-point manner *inline* with
the message.

Honestly, I don't think top-posting is ever a good thing. The good alternative to bottom-posting (or inline posting) is to reply in such a comprehensive way that the quoted message is not necessary for context, and thus should not be included (if the sender wants to refer to it, he presumably has the power to keep it archived). This would entail writing an email like a regular letter—I do so on occasion, but that's not the nature of technical communication.
