haggholm: (Default)

Mail encryption

If privacy of your email matters to you, you may want to consider using end-to-end encryption. This is different from using a “secure” email provider: With end-to-end encryption, the sender of the email encrypts it before sending, and the receiver decrypts it after receiving it. It’s not decrypted during transit—thus, even your email provider cannot read your email.

Without going into mathematics whose details I’d have to review anyway, suffice to say that PGP is a strong system of email security that, with sensible (default!) settings, cannot be effectively broken with modern technology and modern mathematics.¹ (Strong enough, in fact, that various US security agencies tried to suppress it, and when its creator released it for free to the world, he was taken to court and accused of exporting munitions. The case never really went anywhere.)

PGP uses something called asymmetric encryption. Technical details aside, the nifty and amazing thing about it is that it’s what’s known as a public key cryptosystem, meaning that I can give you a (public) key that you can use to encrypt messages for me, but no one², not even you, can decrypt them…except for me, as I retain a special (private) key with the unique power to decrypt. My public key is here, should you wish to send me secure email.
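To make the asymmetry concrete, here is a toy sketch of RSA (one of the algorithms PGP can use) in Python, with absurdly small primes chosen purely for illustration. Real keys are thousands of bits long, and real PGP uses asymmetric encryption only to wrap a symmetric session key:

```python
# Toy RSA, purely to illustrate the public/private asymmetry.
# Real PGP keys are thousands of bits; these primes are laughably small.
p, q = 61, 53
n = p * q                 # modulus; part of both keys
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent (modular inverse; Python 3.8+)

def encrypt(m):
    """Anyone holding the public key (e, n) can encrypt."""
    return pow(m, e, n)

def decrypt(c):
    """Only the holder of the private key (d, n) can decrypt."""
    return pow(c, d, n)

message = 42
ciphertext = encrypt(message)

assert ciphertext != message           # unreadable without the private key
assert decrypt(ciphertext) == message  # round-trips for the key holder
```

Publishing (e, n) lets anyone encrypt messages to me; recovering d from (e, n) requires factoring n, which is exactly the mathematical problem footnote ¹ alludes to.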

My preferred solution is a plugin called Enigmail for my mail client of choice, Thunderbird.

PGP with Enigmail in Windows

There are some other solutions, none of which I have used.

PGP for webmail (like GMail) in Chrome

There's a browser plugin, currently for Chrome only though Firefox is in the works, called Mailvelope that will transparently encrypt/decrypt your webmail. There's a helpful guide. For now, there’s another plugin for Firefox, but it has received mixed reviews.


If you’re using Linux, this shouldn’t be a problem in the first place. Install your mail client of choice, and it will almost certainly ship with an OpenPGP plugin or have one readily available.


Apparently there's an Outlook plugin. There’s a plugin for OS X Mail.app.


As a reminder, keep in mind that your security is never stronger than your password; and your password is never safer than the least secure place it is stored. If your password is weak (password, your name, your date of birth…) there’s no helping you. If your password is pretty strong (|%G7j>uT^|:_Z5-F), then that’s no help at all if you used it on LinkedIn when they were hacked and their password database stolen, so that any malicious hacker can just download a database of email addresses paired with their passwords.

The solution is to use strong passwords, and to only use each password in one place—if you steal my LinkedIn password you can hack my LinkedIn account, but you can’t use that password to access my bank, email, blog, or any other account. The drawback of strong, single-use passwords is that you’ll never, ever remember them. The counter is to use software to remember them for you.

My preferred solution is a family of software called KeePass for Windows, KeePassX for Linux, KeePassDroid for Android (my smartphone), and so on. This has the primary feature of storing all my passwords in a very strongly encrypted file. I store this file in Dropbox, which lets me share it between my devices (work computer, home computer, laptop, phone). I don’t consider Dropbox fully trustworthy, but that’s OK: if anyone breaks into my Dropbox account, all they’ll get is a very strongly encrypted archive that requires either major breakthroughs or billions of years to break into (or a side-channel attack like reading over my shoulder, literally or electronically; but if they can do that my passwords don’t matter anyway). Thanks to this, I can use very strong passwords (like -?YhS\[q@V4#]F'/L|#*1z)_S".35/#T), uniquely per website or other service. Meanwhile, I only have to remember one password: the password for KeePass itself (which I shouldn’t use for anything else).

The KeePass family of software also tends to come with good password generators, which can generate garbled gobbledygook like the above, and/or respect constraints if a given website won’t let you use special characters (e.g. you can tell it to generate a random password 10 characters long with letters and digits but nothing else, including at least one of each). You can also use it to store files, which is a nice way to keep track of things like PGP keys.
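The core of such a constrained generator is simple. Here is a minimal Python sketch (not KeePass’s actual implementation) using the standard secrets module:

```python
import secrets
import string

def generate_password(length=10, alphabet=string.ascii_letters + string.digits):
    """Generate a random password, retrying until it contains at least one
    letter and one digit (the kind of constraint a website might impose)."""
    while True:
        candidate = ''.join(secrets.choice(alphabet) for _ in range(length))
        if any(c.isalpha() for c in candidate) and any(c.isdigit() for c in candidate):
            return candidate

pw = generate_password()
print(pw)  # e.g. something like 'k3Tq9vLx2B'
```

The retry loop is the simplest correct way to enforce “at least one of each”; rejection sampling keeps the distribution uniform over all valid passwords, which cleverer fix-up schemes do not.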

¹ It may be broken in the future if usefully sized quantum computers are ever built, or if a major mathematical breakthrough is made in prime factorisation—a breakthrough mathematicians have failed to make for some centuries now. If this happens, then I still wouldn’t worry about my personal email: the communications of major world governments, militaries, and corporations will become as vulnerable as mine, and will be a lot more interesting.

² That is to say, there are no direct attacks known to be effective against the ciphers used. There are side channel attacks, which is a fancy way of saying that someone could break your encryption without defeating the mechanism. For example, they could be reading over your shoulder, or they could install malware on your computer that records keystrokes (when you type in passwords), or they could beat you with a $5 wrench until you talk.


Some people feel that the Emperor’s attempt to turn Luke to the Dark Side in Return of the Jedi is a weak story element, in that there is very little in place to tempt him; that when we have heard of Vader being seduced by the Dark Side of the Force, we should expect something more persuasive. Personally I feel that this misses the point. Certainly and trivially it’s true that “if you go with it, you can win the fight against Vader (your father!) and defeat me, the Emperor (although actually it will give me your victory)” is a rather weak argument. However, that entirely misses the point that it was never about an argument to begin with. Nobody ever said that Vader was persuaded to join the Dark Side. The Emperor was not depicted as a demagogue; his evil was blatant.

Ironically, the point that I think is missed by those who argue that the “seduction” of the Dark Side should have been more persuasive and had a more human element to it is that it was about…well, the Force. It was the Dark Side itself that was supposed to hold an unnatural and harmful attraction—not the course that took you there, not the goals it promised to help you reach (or actually did), but the power in itself: Once tasted, forever will it dominate your destiny. Of course Yoda turned out to be wrong in being quite so absolute, but if we accept him (as the story clearly intends) as wise, then we may infer that his error must ipso facto have been a rare exception.

I think of it more like a metaphysical drug, a psychic super-heroin: Use it once or twice and you’re going to be hooked, not because you enjoy it so much but because it gives you no choice—even if the exposure, as it were, is largely accidental, unwilling, and transient, as Luke’s enraged onslaught at the end of Jedi.

You might argue that this is not clearly stated in the films and that I’m just making it up as a post hoc justification. To this I say—well, maybe, to some degree. Still, regarding (as I do) the original trilogy as the authoritative canon, it’s the only interpretation that makes sense to me. You don’t know the power of the Dark Side, Vader intones: I must obey my Master. This was not loyalty, which is still a matter of choice. Rather, the power itself left him no options; he was enslaved to it, using it and under its control.

It also explains the Emperor himself very nicely—twisted and physically distorted, gleefully malicious apparently gratia malice; corrupted, then, by long decades of addiction to the Dark Side. (Need I explicate that I find this a more compelling interpretation than having his face melted by Samuel L. Jackson?)

Speaking of the Emperor, his behaviour lends perhaps the strongest support to my view. I think it is safe to assume that he, within the context of the Star Wars universe, is no fool, and is certainly well versed in the ways of the Force. He has lived with the Dark Side for decades at least, and experienced first-hand whatever effects it has had on him, body and mind. He is, then, pretty well placed to judge its effects on Vader’s mind. Now consider his actions, and his terminal error: He provokes Luke into fighting Vader, apparently expecting Luke’s rage to snare him in the Dark Side. If he did not have good reason to think that this might work, his entire scheme to capture Luke makes no sense. And as he didn’t have anything compelling to tempt or persuade Luke, I submit that he must have expected the intrinsic nature of the Dark Side to do it for him—as his experience had taught him it would.

That’s not the end of it, though, for his behaviour becomes far more foolish on the “psychological view”. The Emperor, seeing Luke caught up in rage, encourages him to kill Vader, and take his father’s place at the Emperor’s side—this while Luke stands over the fallen Vader, who obviously hears every word. Consider this: The Emperor distinctly informs Vader that he’s ready to toss him aside, have him killed, and replace him. But when Luke refuses, and Vader gets back on his feet, the Emperor has no qualms about having Vader by his side again. So: The Emperor demonstrates he’s willing to have Vader killed; Luke refuses to kill his father because he won’t submit and keeps insisting that Vader is not beyond redemption; so the Emperor chooses to torture Vader’s son to death right in front of him, turning his back on this seven-foot cyborg while standing next to a deep shaft into the chasms of the Death Star. Unless the Emperor had strong reason to believe Vader incapable of betraying him, this is beyond foolish: it’s suicide-by-cyborg.

Now, of course it turned out that he was wrong—Luke could and did refuse, and Vader, under these extreme circumstances, proved that though his will had been constrained by the Dark Side of the Force, it had not been utterly subsumed.¹ He was able to make a final choice and redeem himself, though it killed him. (And was this from his injuries, or from the Emperor’s dying lashing-out, or was this because at last he denied himself the addictive substance of the Dark Side on which he had become dependent—and so effectively killed himself?) But unless his mind was constrained, Vader’s killing the Emperor was not a dramatic redemption at all. Of course he might well kill the Emperor, redeemed or not; he had just been betrayed.²

The only way the dramatic climax of the saga becomes a dramatic climax is if you accept that Luke’s resistance was rare, unanticipated, and difficult; and that Vader’s redemption was profound, unprecedented, and hitherto believed impossible by everyone we had met along the way, save Luke only.

Some of you may argue that the psychological view would have given us a better story. Personally I don’t mind a bit of fairy-tale Good versus Evil, so long as it’s not my only fare, but I will not insist that you are wrong. My point is not that a Star Wars version with metaphysical evil is better than a Star Wars version with purely ‘human’ motivations—but rather that judging by the original trilogy, and Jedi in particular, the metaphysical version is the one that was actually made.

You could also argue that if what we saw was a metaphysical conception, it could have been made plainer and it could have been written better. Well, that’s certainly true of all things Star Wars. Nonetheless I disagree with the specific criticism that Jedi is inferior to Revenge of the Sith in that sole aspect of Anakin having a “real temptation” versus the Emperor failing to really tempt Luke with anything. I could even argue that it’s the other way around: By giving Anakin a concrete motivation and casting his apostasy in terms of human motivation, we’re forced to consider the strength and credibility of his sudden turn from “I just want to save my wife” to “alright, I’ll go slaughter some children”, and in terms of human psychology—well, that doesn’t look very plausible to me. Precisely by invoking the mystical corruption of the Dark Side, the original trilogy can at least justifiably ask us to invoke suspension of disbelief.

¹ At this point the Emperor has evidence that the Dark Side was a bit less absolute than he (and everyone else) had hitherto believed, but he didn’t really have time to consider the implications. Seeing Luke resist him, maybe he could have predicted Vader’s redemption, but in any case he didn’t have time.

² It is perhaps a little ironic that the very extreme emotional circumstances at play for Vader are precisely the reason why the “psychological version” would take all the drama out of the climax of the film, by making the choice too easy.


Ubuntu 11.04 is released, with Unity as the default UI. I decide to try it out on my home desktop—it often serves as a (Windows) gaming computer, so I use it less for work of any kind (i.e. with Linux) than my laptop or my office desktop. Upgrading Ubuntu leaves me with the impression that Unity is actually worse than I thought: It looks like some horrible, developmentally challenged bastard offspring of OS X and a Fisher-Price toy, i.e. rather like an Apple computer except dumber and implemented poorly. Needless to say I am not impressed—it might be more accurate to say that I feel like my computer has been vandalised.

So I experiment with this and that. Gnome Shell on Ubuntu, though it’s not officially supported, and has a lot of hiccups. KDE in both a Fedora Spin and the Kubuntu version—it impresses me as having improved greatly since I last used KDE, but still holds no great appeal, not least because some features are so poorly integrated that the recommended solution is to install Gnome tools instead, notably NetworkManager configuration (go ahead, change your default connection to a static IP using the KDE tools).

I also decide to try Fedora 15, which is still in beta, with Gnome: They allegedly do a good job of releasing a desktop with the standard Gnome 3 Shell, which may or may not annoy me but is at least worth a shot. So I install that and find that the menus and launchers don’t work, but okay—it’s prerelease software and I haven’t updated the packages; of course there will be some issues at first. So I fire up gnome-terminal and yum upgrade and it starts installing hundreds of updated packages and…freezes mid-update. Well, it’s prerelease software and it’s replacing most of Gnome while I’m running Gnome, no biggie: I won’t hold beta crashes against anybody. Annoying to have to hard reboot, but it’s no worse than— Except it is, because on hard reboot, Fedora kernel panics early in the boot process.

Now here’s the interesting part. Not only does it kernel panic—my keyboard doesn’t work. And I mean at all—not in Grub, not during BIOS startup, not at all. I gather what can and probably did happen is that the OS sets a USB mode that the BIOS can’t use, but restores it during shutdown, which latter will of course not happen if you have to hard reboot. This is now a bit of a problem, because with the OS panicking on boot and my keyboard unusable, I can’t access the boot menu to start from CD, my HDD being the primary boot device; nor can I enter the BIOS setup to change boot order. To make things even better, if I disconnect the HDD, my BIOS cleverly decides not to boot from the secondary boot device (which would run a Linux live CD and probably load and restore USB functionality), but to issue an error message about a system disk being missing.

(On a side note, this is not a Linux issue. The same thing can happen if you run e.g. Windows. You just have to be really, really unlucky, regardless of which OS you choose.)

All in all, it has not been a good month for my computer.

I’m borrowing an old PS/2 keyboard from the pile of six such keyboards sitting on a shelf in the office, gathering dust; I doubt anyone will care if it goes missing for a day. With any luck at all, the PS/2 keyboard will work and I can change the boot order, load a live CD, reset the BIOS options, something to fix this damned thing. If I’m unlucky, of course, PS/2 will be disabled and I will have to reset the BIOS via hardware pins, if that’s even possible on my mainboard. Otherwise, presumably, it’s fucked and will need replacement.

I have to say I’m rather unhappy about this entire experience.


A while back, I wrote a post about 3D movies and what I dislike about them. Note that I focus on the negatives, which does not mean that I think there are no positives—but I’m certainly not generally impressed, mostly because while I think true 3D graphic displays are a wonderful idea, I don’t think the cinema is the place for it. I mentioned problems like the fact that the faux-3D movie makes me expect parallax effects, which do not exist; and that I find the illusion of different focal depths in the image both tiring and irritating.

These problems could be solved, certainly in theory and if not today then in future practice, for interactive computer simulations—like games—by tracking head movements (to solve the parallax problem: recall this awesome hack) and eye movements, pupil dilations and contractions, to adapt image “fuzziness” and create a truly convincing illusion of focal depth. However, it would not easily lend itself to movies, because it requires adapting the image to the individual viewer; besides, it would be impossible by any technique or technology I know of to even capture the data, except for 3D-generated CG.

In a letter to Roger Ebert, veteran editor Walter Murch has described another problem relating to focal depth that I hadn’t thought about:

The biggest problem with 3D, though, is the “convergence/focus” issue. A couple of the other issues – darkness and “smallness” – are at least theoretically solvable. But the deeper problem is that the audience must focus their eyes at the plane of the screen – say it is 80 feet away. This is constant no matter what.

But their eyes must converge at perhaps 10 feet away, then 60 feet, then 120 feet, and so on, depending on what the illusion is. So 3D films require us to focus at one distance and converge at another. And 600 million years of evolution has never presented this problem before. All living things with eyes have always focussed and converged at the same point.

True enough, and as the article goes on to say, this may well account for a good deal of the fatigue and headaches that 3D moviegoers experience. I wonder how, or even if, this could feasibly be solved by a single-user adaptive system. I suppose if the display used a technology like (very, very low-power!) lasers whose angle of striking the eye could be varied depending on focal depth…
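To put rough numbers on Murch’s point, here is a small Python sketch computing the vergence angle (the angle between the two lines of sight) for his example distances, assuming a typical interpupillary distance of about 6.5 cm:

```python
import math

IPD = 0.065   # interpupillary distance in metres (~6.5 cm is a typical adult value)
FOOT = 0.3048 # metres per foot

def vergence_angle_deg(distance_m):
    """Angle between the two lines of sight when both eyes fixate a
    point straight ahead at the given distance."""
    return math.degrees(2 * math.atan(IPD / (2 * distance_m)))

screen = vergence_angle_deg(80 * FOOT)  # the plane the eyes must *focus* on
for feet in (10, 60, 120):
    apparent = vergence_angle_deg(feet * FOOT)
    # The farther the apparent distance is from the screen plane, the larger
    # the mismatch between where the eyes focus and where they converge.
    print(f"{feet:>3} ft: converge at {apparent:.3f} deg vs focus plane {screen:.3f} deg")
```

The angles themselves are tiny fractions of a degree, but the visual system is exquisitely sensitive to vergence, which is why the mismatch is plausibly fatiguing.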


My website has gone through many, many iterations: Major backend changes are vastly more frequent than major content changes. For a while, I wrote some basic framework and generation stuff of my own, as a learning experience; more recently, I ported all the content to Django.

The only part of my website that makes real use of dynamic features is my book list, where I keep track of books I’ve read, and am starting to add features like automatically linking to Amazon, Chapters, Project Gutenberg, WorldCat, and so on; I’m also going to add my own personal ratings and reviews. (All, of course, because it pleases me to do so. I have no visitors to speak of.)

However, that book list—the only part that needed any tools at all beyond a menu preprocessor—was suffering from rather awful performance. The weak link turns out to be Django’s ORM. The list contains close to 700 books, correlated with authors, series (with volume information), languages, translations and translators… Of course, since pretty much all of the information is needed to display the page, it should be possible to fetch pretty much all the data in one query. Unfortunately, that’s not the case.

The page needs to fetch all Book objects. Ideally, the ORM would simply fetch all the related objects belonging to those books in the same query—Person and Language objects and so forth. Django, as far as I can tell, has no way of doing this automatically (by setting options or properties on the objects). It does expose a method to sort of do it, the select_related() method on the QuerySet…but it turns out to have a glaring weakness: It supports only simple foreign key relationships. There appears to be no way at all to invoke select_related() and fetch objects via many-to-many relationships! Since my database is full of those, this becomes a problem: The ORM made individual calls to fetch related data for each of almost 700 objects; a total of thousands of database calls per request—where only one call should be necessary!
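The effect is the classic “N+1 queries” problem. Stripped of any ORM, it looks like this (a generic sketch using Python’s built-in sqlite3, not my actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE book   (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book_author (book_id INTEGER, person_id INTEGER);
    INSERT INTO book VALUES (1, 'Dune'), (2, 'Hyperion');
    INSERT INTO person VALUES (1, 'Frank Herbert'), (2, 'Dan Simmons');
    INSERT INTO book_author VALUES (1, 1), (2, 2);
""")

# What the ORM effectively did: one query for the books, then one more
# per book to pull in its authors -- N+1 queries in total.
naive_queries = 1
books = conn.execute("SELECT id, title FROM book").fetchall()
for book_id, title in books:
    conn.execute("""SELECT name FROM person
                    JOIN book_author ON person.id = book_author.person_id
                    WHERE book_author.book_id = ?""", (book_id,)).fetchall()
    naive_queries += 1

# What one would want: a single joined query fetching everything at once.
rows = conn.execute("""
    SELECT book.title, person.name
    FROM book
    JOIN book_author ON book.id = book_author.book_id
    JOIN person ON person.id = book_author.person_id
    ORDER BY book.id
""").fetchall()

print(naive_queries, "queries naively vs 1 joined query for", len(rows), "rows")
```

With 2 books that is 3 queries instead of 1; with my nearly 700 books, it is the difference between one round trip and thousands.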

Of course, I could easily make a custom query to present the list. However, after running up against the aforementioned obstacle with Django, I decided to have a look at TurboGears. The reason is twofold. First, since it’s the only part of my site that needs a framework at all, it does make sense to pick a framework that works well for that particular page. Second, it’s an excuse to explore a new framework—a learning experience. The latter alone is a sufficient reason to me; together, they were compelling.

Getting set up and porting my application to TurboGears was so trivial that it hardly bears speaking of. There was not much logic for most pages; setting up the URL and controller stuff was trivial. I do like the TurboGears style: Somehow it feels more natural than Django.

I first ported the templates from Django to Genshi; then, finding performance problematic, to Mako. I had some brief initial reservations about Mako. A template language that allows raw Python? My first thought was that it would invite worse practices, in that it allows a developer to put non-presentation code amidst presentation. However, in brief practice it felt helpful in that it allowed me to put some Python logic in the template that could not possibly be expressed in a Django template, but was really, truly not concerned with anything but the specific page’s presentation.

The most significant change is, of course, that TurboGears ships by default with SQLAlchemy, which is my poster child for what an ORM should look like. Using the declarative style, it’s simple to do simple things; but importantly, it allows you to accomplish whatever you damn well please. In particular, pulling in related objects—apparently impossible with the Django ORM—is trivial in SQLAlchemy (pass lazy='joined' to the relationship). Thus, loading my page requires one database query rather than thousands.

All is not rosy, however. I’m not enamoured with the style of Django’s documentation (I sometimes find it hard to locate what I need), but it is very comprehensive—and very accurate. With TurboGears, I’m a lot less impressed. I spent a lot of time reading and following the documentation on Routes only to find, in the end, that support is limited and broken. Fortunately the default URL mapper gets me by, but I’m not entirely happy. Worse, following the documentation on customising the admin views, I have so far been unable to get anything working at all. When I followed the very examples from the docs, nothing happened!

At the moment, I have two very similar implementations of my site in Django and in TurboGears. The TurboGears version has a vast edge in performance: Generating the booklist page is about four times faster on initial load, and five times faster on subsequent access, before caching; accessing other pages, too, is much faster. On the other hand, the admin pages are currently not usable, so I have to rely on the Django admin views, as both share the same DB. Annoying to be sure, though keeping the Django ORM definition up to date is not a large maintenance burden in exchange for a very slick admin interface.

I’m not very sure of where I want to go from here. The TurboGears version is faster; I prefer SQLAlchemy; and there are style aspects of TurboGears I prefer to Django. I like Mako in that I can put custom display stuff in defs in templates: In Django, where emphasis is placed on making templates presentation-only, I don’t have defs and so must place display-specific logic in separate files! On the other hand, Django seems a bigger and more reliable project, its documentation is superior—and TurboGears has some annoying glitches, as seen in my failure to get Routes and admin views to work (whether those failures are due to bugs proper, or inadequate/erroneous documentation)—and the deployment, though now safely automated, was just awful, with strange complaints about setuptools versions that were already installed.

I think I prefer TurboGears, but Django’s reliability, ease of deployment, and admin interface make me hesitate.


In a spectacular display of missing the point, a group of “security researchers” released a Firefox plugin called BlackSheep, designed to combat Firesheep by detecting if it’s in use on a network and, if so, warning the user.

To explain why this is at best pointless and at worst harmful, let’s recapitulate what Firesheep does: By listening to unencrypted traffic on a network (e.g. an unsecured wireless network), it steals authentication cookies and makes it trivial to hijack sessions on social networks like Facebook.

Let’s use an analogy. Suppose some auto makers, Fnord and eHonda and so on, were to release a bunch of remote controls that could be used to unlock their cars and start the engines. Suppose, furthermore, that these remote controls were very poorly designed: Anyone who can listen to the remote control signals can copy them and use them to steal your car. This takes a bit of technical know-how, but it’s not exactly hard, and it means that anyone with a bit of know-how and a bit of malice can hang around the parking lot, wait for you to use your remote, and then steal your car while you’re in the shop.

Now suppose a bunch of guys come along and say Hey, that’s terrible, we need to show people how dangerous this situation is, and they start giving away a device for free that allows anyone to listen to remotes and steal cars. This device is to Fnord and eHonda remotes exactly what Firesheep is to Facebook, Twitter, and so forth. What’s important to realise is that the device is not the problem. It does allow the average schmoe, or incompetent prankster, to steal your car (or use your Facebook account), but the very important point is that the car thieves already knew how to do this. Firesheep didn’t create a problem; by making the problem trivial for anyone to exploit, it generated a lot of press for it.

What the Firesheep guys wanted to accomplish was for Facebook and Twitter and so on to stand up and, in essence, say Whoops, we clearly need to make better remote controls for our cars. (It’s actually much easier for them than for our imaginary auto manufacturers, though.) True, Firesheep does expose users to pranksters who would not otherwise have known how to do this, but the flaw was already trivial to exploit by savvy attackers, which means that people who maliciously wanted to use your account to spam and so forth could already do so.

Now along come the BlackSheep guys and say, Hey, that’s terrible, the Firesheep guys are giving away a remote that lets people steal other people’s cars!, and create a detection device to stop this horrible abuse. But of course that doesn’t address the real point at all, because the real point has nothing to do with using Firesheep maliciously; the point was to illustrate how easy it is to attack a flawed system.

This is stupid for several reasons:

  1. If BlackSheep gets press, it might create an impression that the problem is solved. It isn’t, of course, firstly because Firesheep wasn’t the problem to begin with, and secondly because BlackSheep only runs in, and protects, Firefox.

  2. People running badly protected websites like Facebook could use BlackSheep as an excuse not to solve the real problem, by pretending that Firesheep was the problem and that problem has been solved.

  3. Even as a stop-gap measure, BlackSheep is a bad solution. The right solution is for Facebook and Twitter and so on to force secure connections. Meanwhile, as a stop-gap solution, the consumer can install plugins like the EFF’s HTTPS Everywhere that force secure connections even to sites that don’t do it automatically. This is a superior solution: BlackSheep tells you when someone’s eavesdropping on you; HTTPS Everywhere prevents people from eavesdropping in the first place.

    Let me restate this, because I think it’s important: BlackSheep is meaningful only to people with sufficient awareness of the problem to install software to combat it. To such people, it’s an inferior solution. The correct solution is not to ask consumers to do anything, but for service providers (Facebook, Twitter, …) to fix the problem on their end; but if a consumer does anything, BlackSheep shouldn’t be it.

As I write this post, I hope that BlackSheep will get no serious press beyond a mention on Slashdot. It deserves to be forgotten and ignored. In case it isn’t ignored, though, there need to be mentions in the blogosphere of how misguided it seems.

Let’s all hope that Facebook, Twitter, and others get their act together; meanwhile install HTTPS Everywhere.


Short version: Facebook sucks, Twitter kind of sucks, and if you use Firefox, install this add-on now.

Long version:

A Firefox extension called Firesheep made the news recently in a pretty big way. In a nutshell, it allows just about anyone to walk into, say, a coffee shop with an open wireless network and capture the cookies¹ authenticating everyone in there with Facebook. Or Twitter. Or…well, any of a great number of sites where you probably don’t want just about anyone to read everything you or your friends have posted, or pose as you.

Of course, Firesheep didn’t create this problem. What it did—and what it was designed to do—was to highlight the problem by creating a tool so easy to use that anyone could do it. Malicious hackers who actually want to steal your credentials and do evil things with your accounts could already do it. The Firesheep people wanted to give the problem more press, and they succeeded. It’s as if all car alarms had an easy way to disable them that the car thieves already knew about; Firesheep gave it publicity by handing average consumers a simple toolkit. (It’s also not, strictly speaking, limited to wireless networks, but open wireless networks are a huge opportunity.)

I won’t go into the technical details in great depth at this point. Suffice to say that you can connect to many servers, including all these big social networks, in a secure way. The problem is that it’s not the default. Use a secure SSL connection (you’ll recognise it by the https:// in the URI) and this particular problem goes away.

If you happen to use Firefox, there are actually plugins that you can install that force the browser to use the secure versions of various websites. You can look up ForceTLS on your own time if you are so inclined. I will personally go with HTTPS Everywhere, published by the EFF. Simply install it, and Firefox will connect to Facebook, Twitter, &c. in a way that cannot be eavesdropped upon.

Of course, it has limitations; see the FAQ. Ironically, LiveJournal is an example of a site that doesn’t actually work because it doesn’t really have a secure version.

The real, long-term solution is obvious to anyone who understands the problem and knows anything about authoring websites: No credentials should ever be transmitted over unencrypted channels (except such credentials as are specifically intended for it, like the Diffie-Hellman handshake and what not). If you ever write a webapp with authentication, only allow authentication over SSL, and only send the cookie over SSL. If you’re worried that it’ll be too bandwidth or CPU intensive, by all means serve static content—anything that you explicitly, whitelist-wise know to not matter in terms of security—over plain HTTP, just on a different subdomain or domain or what have you than the one on which you serve the cookies. (There are other reasons, anyway; my own website already serves static content from a separate subdomain because caching rules are very different, and static content can be served up by Lighttpd rather than an Apache instance that runs everything through mod_wsgi and Django.)
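Part of that server-side fix is marking the session cookie itself as Secure (and HttpOnly, while you’re at it), so the browser will refuse to send it over plain HTTP at all. A minimal sketch with Python’s standard http.cookies module, using a made-up token:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "d41d8cd98f00b204"  # hypothetical session token
cookie["session_id"]["secure"] = True      # browser only sends it over HTTPS
cookie["session_id"]["httponly"] = True    # not readable from JavaScript

# The Set-Cookie header the server would emit:
header = cookie.output()
print(header)
```

The Secure flag is what defeats Firesheep-style sniffing: even if the user later browses the site over plain HTTP, the cookie never travels in the clear.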

Unfortunately, this is not a solution that the client can effect: It’s up to website and service authors. As a user, the best you can do is stick to SSL whenever possible.

¹ The cookies, not the credentials. Even a site with insecure cookies may transmit passwords securely. Conversely, even if you log in through a secure https:// login page, the cookies may be served up for anyone to sniff and steal.

haggholm: (Default)

Today’s xkcd is a good reminder of the dangers of password reuse—and a good time to remind whosoever might stumble across this that while it’s hard for humans to invent and remember different passwords for different services, it’s easy for software to do it. I use SuperGenPass (download here), which lets me type in the same password everywhere but actually use a different one on every site: Google does not have my LiveJournal password; LiveJournal doesn’t have my Facebook password; Facebook doesn’t get to see a password I use anywhere else.
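The underlying idea is simple enough to sketch in a few lines of Python. This is only an illustration of the general technique, not SuperGenPass’s exact algorithm: one memorised master password, hashed together with each site’s domain, yields a different effective password everywhere.

```python
import base64
import hashlib

def site_password(master: str, domain: str, length: int = 10) -> str:
    """Derive a deterministic, site-specific password from one master
    password. (An illustration of the idea only, not SuperGenPass's
    exact algorithm.)"""
    digest = hashlib.sha256(f"{master}:{domain}".encode()).digest()
    return base64.urlsafe_b64encode(digest).decode()[:length]

# The same master password yields unrelated passwords on each site:
print(site_password("correct horse", "livejournal.com"))
print(site_password("correct horse", "facebook.com"))
```

Because the derivation is deterministic, nothing needs to be stored anywhere; and because the hash is one-way, a site that leaks its derived password reveals nothing useful about the master or about any other site’s password.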

Password Reuse
haggholm: (Default)

There are few things so sure to annoy me as hype. Among those few things, of course, is factual inaccuracy. For both of these reasons, the new phenomenon¹ of 3D movies annoys me.

I will concede that in a narrow, technical sense, these movies are indeed 3D in that they do encode three spatial dimensions—that is, there is some information about depth encoded and presented. However, I don’t think it’s all that good, for various reasons, and would be more inclined to call it, say, about 2.4D.

Our eyes and brains use various cues for depth perception. The obvious ones that leap out at me, if you’ll excuse the pun, are

  1. Stereoscopic vision
  2. Focal depth
  3. Parallax

Let’s go over them with an eye (…) toward what movie makers, and other media producers, do, could do, and cannot do about it.

1. Stereoscopic vision

Odds are very good that you, gentle reader, have two eyes. Because these eyes are not in precisely the same location, they view things at slightly different angles. For objects that are far away, the difference in angle is very small. (Astronomers deal mostly with things at optical infinity, i.e. so far away that the lines of sight are effectively parallel.) For things that are very close, such as your nose, the difference in angle is very great. This is called stereoscopic vision and is heavily exploited by your brain, especially for short-distance depth perception, where your depth perception is both most important and most accurate: Consider that you can stick your hand out just far enough to catch a ball thrown to you, while you surely couldn’t estimate the distance to a ball fifty metres distant to within the few centimetres of precision you need.
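To put rough numbers on this (assuming a typical interpupillary distance of about 6.5 cm, a figure not from the post itself), the vergence angle between the two lines of sight falls off very quickly with distance:

```python
import math

IPD = 0.065  # typical interpupillary distance in metres (an assumption)

def vergence_degrees(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight when both fixate
    an object at the given distance."""
    return math.degrees(2 * math.atan((IPD / 2) / distance_m))

print(vergence_degrees(0.3))   # arm's length-ish: roughly 12 degrees
print(vergence_degrees(50.0))  # a distant ball: well under 0.1 degrees
```

A hundredfold difference in angle between 30 cm and 50 m is why the stereoscopic cue dominates up close and contributes almost nothing at a distance.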

“3D” movies, of course, exploit this technique. In fact, I think of these movies not as three-dimensional, but rather as stereoscopic movies. There are three ways of making them that I’m aware of, and three ways to present them.

To create stereoscopic footage, you can…

  1. …Render computer-generated footage from two angles. If you’re making a computer-generated movie, this would be pretty straightforward.

  2. …Shoot the movie with a special stereoscopic camera with two lenses mimicking the viewer’s eyes, accurately capturing everything from two angles just as the eyes would. These cameras do exist and it is done, but apparently it’s tricky (and the cameras are very expensive). Consider that it’s not as simple as just sticking two cameras together. Their focal depth has to be closely co-ordinated, and for all I know the angle might be subtly adjusted at close focal depths. I believe your eyes do this.

  3. …Shoot the movie in the usual fashion and add depth information in post-processing. This is a terrible idea and is, of course, widely used. What this means is that after all the footage is ready, the editors sit down and decide how far away all the objects on screen are. There’s no way in hell they can get everything right, and of course doing their very best would take ridiculous amounts of time, so basically they divide a scene into different planes of, say, “objects close up”, “objects 5 metres off”, “objects 10 metres off”, and “background objects”. This is extremely artificial.

All right, so you have your movie with stereoscopic information captured. Now you need to display it to your viewers. There are several ways to do this with various levels of quality and cost effectiveness, as well as different limitations on the number of viewers.

  1. Glasses with different screens for the two eyes. For all I know this may be the oldest method; simply have the viewer or player put on a pair of glasses where each “lens” is really a small LCD monitor, each displaying the proper image for the proper eye. Technically this is pretty good, as the image quality will be as good as you can make a tiny tiny monitor, but everyone has to wear a pair of bulky and very expensive glasses. I’ve seen these for 3D gaming, but obviously they won’t work in movie theatres.

  2. Shutter glasses. Instead of having two screens showing different pictures, have one screen showing different pictures…alternating very quickly. The typical computer monitor has a refresh rate of 60 Hz, meaning that the image changes 60 times every second. Shutter glasses are generally made to work with 120 Hz monitors. The monitor will show a frame of angle A, then a frame of angle B, then A, and so on, so that each angle gets 60 frames per second. The way this works to give you stereoscopic vision is that you wear a pair of special glasses, shutter glasses, which are synchronised with the monitor and successively block out every alternate frame, so that your left eye only sees the A angle and your right eye only sees the B angle. Because the change is so rapid, you do not perceive any flicker. (Consider that movies look smooth, and they only run at 24 frames per second.)

    There’s even a neat trick now in use to support multiplayer games on a single screen. This rapid flipping back and forth could also be used to show completely different scenes, so that two people looking at the same screen would see different images—an alternative to the split-screen games of yore. Of course, if you want this stereoscopic, you need a 240 Hz TV (I don’t know if they exist). And that’s for two players: 60 Hz times the number of players, times two if you want stereoscopic vision…

    In any case, this is another neat trick but again requires expensive glasses and display media capable of very rapid changes. OK for computer games if you can persuade gamers to buy 120 Hz displays, not so good for the movie theatre.

  3. The final trick is similar to the previous one: Show two images with one screen. Here, however, we do it at the same time. We still need a way to get different images to different eyes, so we need to block out angle A from the right eye, &c. Here we have the familiar red/green “3D” glasses, where all the depth information is conveyed in colours that are filtered out, differently for each eye. Modern stereoscopic displays do something similar but, rather than using a colour-based filter, display the left and right images with different polarisation and use polarised glasses for filtering. This reduces light intensity but does not entirely filter out a specific part of the spectrum from each eye.†

To summarise, there are at least three ways to capture stereoscopic footage and at least three ways to display it. Hollywood alternates between a good and a very bad way of capturing it, and uses the worst (but cheapest) method to display it in theatres.
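The refresh-rate arithmetic behind the shutter-glasses scheme is straightforward: each eye of each viewer needs its own ~60 frames per second, shown in sequence on one shared screen. A sketch:

```python
def required_refresh_hz(viewers: int, stereoscopic: bool = True,
                        per_eye_fps: int = 60) -> int:
    """Refresh rate a single shared screen needs so that every viewer
    gets 60 fps per eye (or per viewer, if not stereoscopic)."""
    eyes = 2 if stereoscopic else 1
    return viewers * eyes * per_eye_fps

print(required_refresh_hz(1))                      # ordinary shutter glasses: 120 Hz
print(required_refresh_hz(2, stereoscopic=False))  # two-player shared screen: 120 Hz
print(required_refresh_hz(2))                      # two players in stereo: 240 Hz
```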

2. Focal depth

All right, lots of talk but all we’ve discussed is stereoscopic distance. There are other tricks your brain uses to infer distance. One of them is the fact that your eyes can only focus on one distance at a time. If you focus on something a certain distance away, everything at different distances will look blurry. The greater the difference, the blurrier.

In a sense, of course, this is built into the medium. Every movie ever shot with a camera encodes this information, as does every picture shot with a real camera—because cameras have focal depth limitations, too.

The one medium missing this entirely is computer games. In a video game of any sort, the computer cannot render out-of-focus things as blurry because, well, the computer doesn’t know what you are currently focussing on. It would be very annoying to play a first-person shooter and be unable to make out the enemy in front of you because the computer assumes you’re looking at a distant object, or vice versa. Thus, everything is rendered sharply. This is a necessary evil: it makes 3D computer graphics look very artificial, because everything is sharp in a way it would not be in real life. (The exception is in games with overhead views, like most strategy games: Since everything you see is about equally distant from the camera, it should be equally sharp.)

Personally, however, I have found this effect to be a nuisance in the new “3D” movies. When the stereoscopic dimension is added to a film, I watch it less as a flat picture and more as though it truly did contain 3D information. However, when (say) watching Avatar, looking at a background object does not bring it into focus—even though stereoscopic vision informs me that it truly is farther away, my eyes receive the same fixed focal depth no matter where on the screen I look.

This may be something one simply has to get used to. After all, the same thing is in effect in regular movies, in still photography, and so on.

Still, if I were to dream, I should want a system capable of taking this effect into account. There already exist computers that perform eye-tracking to control cursors and similar. I do not know whether they are fast enough to track eye motion so precisely that out-of-focus blurring would become helpful and authentic rather than a nuisance, but if they aren’t, they surely will be eventually. Build such sensors into shutter glasses and you’re onto something.

Of course, this would be absolutely impossible to implement for any but computer generated media. A movie camera has a focal distance setting just like your eye, stereoscopic or not. Furthermore, even if you made a 3D movie with computer graphics, in order to show it with adaptive focus, it would have to simultaneously track and adapt to every viewer’s eye movements—like a computer game you can’t control, rather than a single visual stream that everyone perceives.

3. Parallax

Parallax refers to the visual effect of nearby objects seeming to move faster than far-away ones. Think of sitting in a car, watching the light poles zoom by impossibly fast, while the trees at the side of the road move slowly, the mountains only over the course of hours, and the moon and stars seem to be entirely fixed. Parallax: Because nearby objects are close to you, your angle to them in relation to the background changes more rapidly.

Of course, in a trivial sense, every animated medium already does capture this; again, it’s not something we need stereoscopic vision for. However, at close distances, a significant source of parallax is your head movement. A movie can provide a 3D illusion without taking this into account…so long as you sit perfectly still, never moving your head while a close-up is on the screen.

As with focal depths, of course, this is viewer-dependent and completely impossible to implement in a movie theatre. However, it should be eminently feasible on home computers and game systems; indeed, someone has implemented headtracking with a Wii remote—a far more impressive emulation of true three-dimensionality than any amount of stereoscopic vision, if you ask me.

Combined with eye tracking to monitor focal depth, this would be amazing. Add stereoscopic images and you’d have a perfect trifecta—I honestly think that would be the least important part, but also the easiest (the technology is already commercially available and widespread), so it would be sort of silly not to add it.


After watching a “3D” movie or two, I have come away annoyed because I felt that the stereoscopic effect detracted rather than added. Some of this is doubtless because, being who I am, the hyped-up claim that it truly shows three dimensions properly² annoys me. Some of it, however, is a sort of uncanny valley effect. Since stereoscopic vision tantalises my brain into attempting to regard these movies as three-dimensional, it’s a big turn-off to find that there are several depth-perception effects that they don’t mimic at all. If a movie is not stereoscopic, my brain does not seem to go looking for those cues, because there’s no hint at all that they will be present.

Of course, it may just be that I need to get used to it. After all, “2D” movies³ already contain depth cues ([limited] parallax, [fixed] focal depth differences) without triggering any tendency to go looking for more. I haven’t watched a lot of stereoscopic imagery, and perhaps my brain will eventually learn to treat them as images-with-another-feature. For now, however, adding stereoscopic information to productions that can’t actually provide the full 3D visual experience seems to me rather like serving up cupcakes with plastic icing: It may technically be closer to a real cupcake than no icing at all, but I prefer a real muffin to a fake cupcake.

¹ It’s at least new in that only now are they widely shot and distributed.

² Technically all movies do depict three dimensions properly, but these new ones are really looking to add the fourth dimension of depth to the already-working height, width, and time.

³ Which are really 3D; see above.

† This should not have needed to be pointed out to me, as I have worn the damned polarised things, but originally I completely forgot them and wrote this as though we still relied on red/green glasses. Thanks to [livejournal.com profile] chutzman for the correction.

haggholm: (Default)

So, I moved to a new apartment last Friday, July 16, across a staggering distance of three blocks. Naturally, I want my DSL service to move with me, and I’m a big fan of TekSavvy. However, dealing with TekSavvy qua third-tier ISP does have the disadvantage of involving one of the big telcos (Telus in BC, Bell in Québec, …). Past experiences have not been pleasant.

In brief, Telus owns the physical lines and switches. TekSavvy are responsible for my internet service, but Telus is responsible for setting up a connection so that my DSL modem can physically communicate with their gateway and DNS servers. Thus, whenever I ask TekSavvy to do anything that involves such low-level services (e.g. turning service off at one address, or turning it on somewhere), they can’t do it themselves—they have to place a work order with Telus.

I contacted TekSavvy fairly close to the move, as a lot was happening pretty quickly, and I was prepared to be cited a date rather late in the month. I was pleasantly surprised when the rep told me that they could probably have it activated as early as the 16th! —This was not to be, of course.

A few days later, they got back to me…and here was my first-ever poor experience with TekSavvy support. The email I received didn’t say what was wrong, but only that there was a problem with my DSL order and I had to call them. I did, and the woman I spoke to was, to put it mildly, not up to the very high standards I am used to with TekSavvy support. She had no idea what was going on, and started off by asking for all my address details (which they already had down correctly), then (after putting me on hold) told me that apparently DSL was not available at my new address. I wanted to know what was going on, and said so, and after many hesitations and stammerings and ultimately being put on hold thrice, it turned out that all that was really wrong was that Telus had moved my activation date to Tuesday, July 20. Oh well: This was to be expected; the 16th always did sound too good to be true. But I should have been told that right away in the email, or at least straightaway on the phone—rather than being on the phone for half an hour, on hold thrice, and on the verge of cancelling my service! (Remember, she told me that it was not available at my new address. I came dangerously close to switching to another ISP.)

Come the 20th, I get home after three hours of jiu-jitsu and sit down to check the status of my internet connection, which turns out to be none at all; I have no access and my modem finds nary a trace of any DSL access. I sigh and call TekSavvy again. There’s a bit of a wait, but when I finally do get to talk to someone it’s more what I’m used to with TekSavvy—a friendly, confident and (yes) tech savvy guy who knows what’s going on and can talk to me with a sense of humour and an attitude as though I, a customer, am smart enough to actually communicate with.

It turns out that Telus has in fact changed the activation date again. This time (heaven knows why) they opted to call me directly rather than have TekSavvy do so. This was unfortunate. The note on my account said that Telus tried to call me, but it seemed as though my phone was off and they were (it seems) unable to leave so much as a voice mail. (My phone was not off. My phone is never turned off.) Having thus tried once and miraculously failed to contact me in any way whatsoever, Telus did the reasonable thing and ignored the situation, thus leaving me unaware that they had rescheduled my activation.

I am of course sort of puzzled that this activation is such a big deal—why is this not a nigh-automatic process? They have all my account details in their systems; a computer should surely be able to do this work for them. I am also somewhat surprised that Telus were unable to contact me. They provide my mobile service. When my phone company are unable to figure out how to reach me by phone, I am mildly troubled.

Here’s hoping that they actually turn the damned thing on tomorrow.

haggholm: (Default)

A horrible bug was causing trouble for our clients. Yesterday, I hunted it down and fixed it (as I thought), which involved changes to both PHP code and Javascript. Today, it seems that the bug is still occurring… Naturally, I am unable to reproduce it.

Hypothesis: The bug fix I created works. If so, any client with the latest code properly deployed should not experience this problem. However, since some of the changes were made to Javascript, I do not and cannot know whether all the fix was “properly” deployed: Some of the users may have stale versions of the Javascript files in their browser (or proxy) caches, and I can’t detect, let alone fix this problem.

Hack: Rename the Javascript file. This causes no problems (unlike CVS, Subversion keeps track of the change history across renames), and clients are forced to reload it: they cannot use a stale, cached copy when the request is ostensibly for a different resource.
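The same trick can be automated rather than done by hand. A sketch (paths and URL layout are hypothetical): embed a hash of the file’s contents in the URL, so any change to the file produces a different URL and defeats stale caches, while unchanged files stay cacheable.

```python
import hashlib
from pathlib import Path

def versioned_url(path: str) -> str:
    """Return a static-file URL that changes whenever the file's
    contents do, forcing browsers past stale cached copies."""
    digest = hashlib.md5(Path(path).read_bytes()).hexdigest()[:8]
    return f"/static/{Path(path).name}?v={digest}"
```

Embedding the hash in the filename itself (as the manual rename does) is arguably even more reliable, since some proxies treat URLs with query strings differently for caching purposes.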

End result: Who knows? Either my fix was good and this will clean up the stale caches, or I was wrong and the fix didn’t address all cases of the problem in the first place. I have no way of knowing until and unless I receive more automatic error reports. Hopefully I won’t at all—which means I’ll never be quite sure that it’s gone.

I hate bugs I cannot reproduce.

haggholm: (Default)

I briefly borrowed an iPhone to see how the damned thing worked. The onscreen keyboard annoyed the hell out of me, but I expect I would get used to that. (No haptic feedback means I can’t type without looking, but hitting the right buttons ought to get easier.) Other than that, it seemed to be about exactly as good as I expected…which is unfortunate, because part of me was really hoping there’d be something to wow and surprise me. But no… The email setup was painless, and IMAP worked just fine. LDAP worked…after a fashion. Support seems pretty rudimentary. I suppose this phone really does do what I need in a phone. I just wish there were a gadget out there good enough that I could get excited about it.

I really ought to get some new phone, though. My current phone has started to freeze and crash when I do certain things, such as search Google Maps or attempt to save a new phone number, both of which are rather important functions. And, of course, its lack of calendar sync means that I am effectively without a calendar (I need something with Google Calendar access, alas). And if I do get a new phone in the near future, given the current line of phones available, and given what’s available in Canada, the iPhone seems the only option.


haggholm: (Default)

It’s been eight months since the last time I thought about getting a new phone, and I have not grown any more fond of my existing phone in the interim. The reason why I haven’t bought a new one is that when I last looked, exactly no phone at all met my criteria.

  • Must handle generic IMAP and handle it well
  • Must have a GPS (because Google Maps + GPS is too good a combination to forego)
  • Must be able to intelligently access my Google calendar

Some very major nice-to-haves:

  • Able to update/synchronise my Google calendar
  • Able to utilise an LDAP address book for emailing; ideally: Synchronise contacts
  • Able to synchronise contact info with a Linux computer (not so important if there’s a good LDAP solution)

In particular, the phones available at the time (the iPhone version whatever, and the Android G1) both had a reputation for poor IMAP support, and there was official support for LDAP on exactly no devices. Now it seems that the more recent iterations of the iPhone OS do come with LDAP support. The notion of being able to maintain a single phone-and-address book (which I can, in addition, easily back up: in fact, I do so regularly with a cron job) is appealing. The notion of using a single Google calendar instead of separate calendars on my phone and my computer (which leads me to check both less frequently since neither is all that helpful) is equally so.

I’d rather go with an Android device since I prefer the openness and I hate to be associated with Apple fanboys even by coincidence, but it does seem like the iPhone does more to meet my requirements right now than does any Android platform on the market.

I guess all it comes down to at this point is, one: How good is that LDAP support? —and two: How nicely does the iPhone interact with vanilla IMAP accounts?

haggholm: (Default)

Having adopted SpiderOak to manage files that are too large to conveniently manage in my version controlled home directory, as well as files with data sensitive enough that I’d rather not put it on the same server as my webapps, the time has come to finally get off my arse and clean up that mess.

While there were files that were always clearly too large to commit to my repository, and were only ever backed up remotely once I adopted SpiderOak as a solution, there were also files that I would now clearly back up with SpiderOak, but which I committed to version control back then. There are also some files sufficiently sensitive that I should not have committed them, but I did… Time at last to delete them.

Of course, deleting files from most version control systems is non-trivial. My files sit in Subversion. To delete them, I have to dump the entire repository into a dump file, run this file through a filter to exclude specified paths, create a new repository, and import the filtered dump into it. A bit cumbersome, but ah well—at least it works. And it seems that, preliminarily, I am able to shrink the repository size by at least 78% (successive estimates as the cleanup progressed: 54%, then 71%).¹
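For reference, the workflow amounts to four commands; this little helper just assembles them (the repository paths and excluded paths below are hypothetical examples, not my actual ones):

```python
def svn_filter_commands(old_repo, new_repo, exclude_paths,
                        dump="repo.dump", filtered="filtered.dump"):
    """Assemble the dump -> filter -> create -> load command lines
    for stripping paths out of a Subversion repository."""
    return [
        f"svnadmin dump {old_repo} > {dump}",
        f"svndumpfilter exclude {' '.join(exclude_paths)} < {dump} > {filtered}",
        f"svnadmin create {new_repo}",
        f"svnadmin load {new_repo} < {filtered}",
    ]

for cmd in svn_filter_commands("/srv/svn/home", "/srv/svn/home-clean",
                               ["trunk/secrets", "trunk/huge-files"]):
    print(cmd)
```

Note that the filtered history is rewritten wholesale, so anyone with a checkout of the old repository has to check out the new one afresh.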

While I’m messing around with this stuff, I just might switch to Mercurial. Hmm…

¹ From an original, hideous 2.4G down to 521M (successive passes: 1.1G, then 710M). The Hg repository weighs in larger at 743M (successively 1.6G and 965M), for some weird reason (still about 69% smaller than the original Subversion repository; earlier estimates were 33% and 60%). —All as measured by `du -sch`.

haggholm: (Default)

Good thing: OpenSSH 4.9+ offers Match and ChrootDirectory directives which can filter users by group and chroot-jail them to their home directories.

Weird thing: ChrootDirectory requires that the full path up to and including the home directory be writable by root only. This means that the users must not have write permissions on their own home directories. As far as I can tell, I can only make this useful by creating user-writable subdirectories inside. (This works fine for our purposes, but is, well, sort of bizarre: Home directories that the users cannot write to!)
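For illustration, a minimal sshd_config arrangement of the sort described might look like the following (the group name is hypothetical; forcing internal SFTP is one common way to use the feature, not the only one):

```
# Excerpt from /etc/ssh/sshd_config (OpenSSH 4.9+)
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
```

Here %h (the user’s home directory) must be owned by root and writable only by root, so actual uploads go into a root-created, user-writable subdirectory inside it.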

Bloody annoying thing: RHEL 5 comes with, I believe, OpenSSH 4.3. Versions with ChrootDirectory have been around for years, but naturally RHEL is a few more years behind, so I have to create my own package to get a chroot-capable SSH setup. It’s not hard, but it is annoying and adds a maintenance burden.

haggholm: (Default)

Starting to get tired of wrestling with the recurrent bugs in my homebrewed framework that prematurely terminate sessions. I’m not too concerned about them, since they do not threaten security—I wrote it in a rather paranoid fashion, and all session- or authentication-related bugs since the first week or so have had to do with premature termination rather than excessive permissiveness—but they do annoy me.

Perhaps it’s time I refactored my website, and maybe a webapp or two, to use Django. I’m sure I could fix these bugs with the help of improved logging, but is it worth the effort? Beyond the “just for fun” reason, I wrote my framework in order to learn about framework development, and to get an inside understanding of session management and security concerns like proper password management, authentication, CSRF protection, and so forth. I did not write it with either a belief or an intention that I would write my own production-worthy system to rival a major project like Django.

I’ve learned a lot of lessons¹, and written some decent code², but if I want to keep working on this framework, I will have to start to Do Things Properly—add unit tests, track down these pesky session termination bugs, and so forth. I consider unit tests, proper logging, and so forth essential to production code, but not to fun exploration projects. I’m rather beginning to think that my framework has reached the point where it should either be made serious, or phased out. The latter sounds more sensible, and less symptomatic of NIH.

Besides, it can’t hurt to learn Django, can it?

¹ Apart from learning about CSRF protection, the most interesting problem I got to solve was probably SQLObjectInherit, which provides (in SQLAlchemy language) single table inheritance for SQLObject, using decorators. It’s not perfect (there are some edge cases where you request an object from some class C and expect a subclass D, but erroneously get the parent class), and I was contemplating switching to SQLAlchemy largely for this reason. It’s also the one thing that makes me question the Django decision, but the lack of single table inheritance is probably a smaller deal, in the long run, than all the myriad problems it solves.
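For readers unfamiliar with the pattern, the gist of single table inheritance can be sketched in plain Python, independent of SQLObject or SQLAlchemy (this is only an illustration of the idea, not the SQLObjectInherit code): all subclasses share one table, and a type column decides which class a fetched row is materialised as.

```python
class Record:
    """Base class; all subclasses share one (conceptual) table."""
    registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        Record.registry[cls.__name__] = cls

    @staticmethod
    def from_row(row):
        # The 'type' column picks the concrete class; an unknown type
        # falls back to the base class (the sort of edge case where you
        # expect a subclass but get the parent).
        cls = Record.registry.get(row["type"], Record)
        obj = cls.__new__(cls)
        obj.__dict__.update(row)
        return obj

class Invoice(Record):
    pass

class CreditNote(Record):
    pass

doc = Record.from_row({"type": "Invoice", "id": 1, "total": 99})
print(type(doc).__name__)  # Invoice
```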

² Decent for a non-unit tested system, which is very different from a production ready or production worthy system.

haggholm: (Default)

The motivation

Unhappy with my cable connection’s performance at various times, I looked around for a better ISP. I decided to go with TekSavvy, for a couple of reasons:

  1. They offer a service I like at a good price. For the same price as I pay for my 7.5M down/512k up, I get 6M down/1M up with TekSavvy. Note that speed can be deceptive: Shaw is cable, and cable is shared with other nearby cable subscribers and tends to slow down during peak usage hours. The DSL line is all mine. And the upload speed is hugely greater, which is important to me—I never saturate a multi-megabit downlink, but when I back up large amounts of data or push data to my webserver, poor uplink speed is sometimes frustrating.

    Both my old Shaw service and my new TekSavvy service impose monthly total bandwidth caps. Shaw: 60 GB. TekSavvy: 200 GB. (In each case, GB, not Gb.)

  2. TekSavvy does not throttle your traffic. In fact, they actively lobby for net neutrality and so forth. This is of course in their own best interest as a lower-tier ISP (they rent lines from companies like Bell and Telus, and don’t want the line providers to throttle their traffic), but the fact remains that by buying service from TekSavvy, I am paying people who advocate for the openness and regulations I want.

    Shaw, of course, throttles traffic like torrents, and works against small companies and consumer interests whenever they can.

  3. I’ve used them before (in Québec), and left with an impression of good value for my money.

The service

I just got my DSL service, so it’s too early to comment on stuff like reliability (obviously I expect no problems—if I did, I wouldn’t have switched). The one minor thing worth remarking on is that as far as CNET’s bandwidth tester is concerned, my 6M DSL service is actually slightly (8.6%) faster than my 7.5M cable service—so don’t let that number scare you.

While I haven’t really tested this, so it is subjective, I am surprised to find that my browsing feels much, much snappier than on the Shaw connection. I would guess that while the bandwidth is similar, my latency is much better.

The misadventures and the support

It is crucial to understand that TekSavvy is an internet service provider. They provide internet access. What they do not do is own, install, or manage the actual lines coming into anybody’s home. The lines are owned by the big telcos, like Bell or (as in my case) Telus. Any work on line installation is done by Telus agents. If you live in Vancouver (elsewhere, substitute your local line provider as appropriate) and sign up with a lower-tier ISP like TekSavvy, and you need DSL activated in your home, they will send a work order to Telus. Telus technicians activate it, and Telus technicians may (or may not) visit your home.

It is perhaps interesting to note at this point that Telus also offers DSL service, so when they provide line service to TekSavvy customers, they are doing work for their direct competitor.

With the preliminaries out of the way, here’s a timeline:

  • November 4: I place an order with TekSavvy. I receive a confirmation email telling me that my “up to 6 Megabit dry copper loop residential DSL service” is set to be tentatively activated on 11/11/2009, anywhere between the hours of 8 am and midnight, though warning me that my activation date could be delayed up to 10 business days due to unanticipated circumstances.

  • November 5: I receive a phone call from TekSavvy. Because Telus has a worker shortage in BC, they—Telus—have delayed my scheduled activation date until December 7.

  • November 6: I receive a phone call from TekSavvy. They misread something and the proper rescheduling date is December 1, not December 7.

  • December 1: I work from home because Telus may need access to my apartment to test things. I don’t like to work from home as I am less productive, but do it—it’s just this once, after all. I see neither hide nor hair of any Telus technician, nor do I hear from them.

  • December 3: I receive a phone call from Telus informing me that my DSL service has been activated. No explanation is given as to why it was delayed past the (already postponed) scheduled date.

  • December 4: I spend a lot of time on the phone with TekSavvy support. My ADSL router came with some weird factory settings, so it takes some resetting and configuring before the real problem emerges: TekSavvy and Telus use different DSL authentication mechanisms (or some similar DSL jargon), and the Telus techs configured my line for Telus DSL service, not TekSavvy. TekSavvy can do nothing but file a ticket with Telus, and since Telus support works Monday through Friday, 8 am to 8 pm (unlike TekSavvy’s 24/7), I will have to wait until after the weekend to see any change.

    Note: Router settings are not TekSavvy’s fault, since I bought my own DSL modem/router rather than getting one from them. All the same, they did help me troubleshoot it, even though it was not one of their recommended/official models, so point to them.

  • December 7: Wonder of wonders! The DSL connection works! Unfortunately, the line rate is not what I’m paying for: just 1536 kbps down/637 up. (Note: Line rate, not measured connection rate.) It should be about 6000/1000. I call TekSavvy support again. I’m told that this is another configuration error by the Telus techs—one they make pretty often. Once again, this isn’t within an ISP’s power to fix; it has to be done by the company running the lines. Another ticket to Telus.

  • December 8: It works! I actually have the line rate I’m paying for!

Obviously, this was all a bit of a pain in the arse.

Note that apart from a minor error (a misreading that affected nothing and was quickly corrected), it appears that all of this was the fault not of TekSavvy, but of Telus. I gather that my case was atypically difficult by quite a wide margin, but I can’t help but wonder if Telus are just not very interested in making things easy on customers of their competitors, regulations be damned…

On the other hand, TekSavvy support was wonderful. When I called for help (which, as mentioned, is available 24/7):

  1. I spoke to competent technicians. TekSavvy does not bother with a call centre full of support staff reading from scripts. Their tech support consists of people who actually understand technical issues.

  2. I did not feel condescended to or talked down to. This is a huge deal to me, as I am very sensitive to it, and as my technical expertise makes a lot of the support scripts many companies use sound like baby talk. The TekSavvy guys, of course, stepped me through the bits I was clueless about, but passed briefly over the parts it was clear I knew. This requires respect for the customer’s intelligence when any is exhibited, and it requires competence (a support monkey reading a script but lacking actual technical expertise can’t possibly judge mine, and won’t know how to address problems in a manner not dictated by the script).

    This alone would make me happy to recommend TekSavvy. I usually find tech support calls a frustrating and sometimes enraging experience. Here, in spite of the annoying frequency of issues caused by Telus, it was more like friendly banter with people I could relate to.

  3. They called to follow up on every issue. They didn’t have the power to directly address a lot of the problems, as they had to go through Telus, but they did make sure that I was kept in the loop as things progressed.


Based on my experiences thus far, I’m happy to recommend TekSavvy. The service itself seems good, the tech support is excellent, and the company supports openness and net neutrality rather than consumer-crushing and unethical business practices. I’m happy to give my money to these guys.

I had a huge delay and numerous issues in getting the service working, but as far as I can tell, all of them were due to Telus, and I wouldn’t want TekSavvy to suffer for that. And, of course, odds are pretty good that you won’t be hit with every Telus fuckup in the book, like I was.

The only remaining items on my ISP agenda are to buy a longer phone cord so I can move the modem to where I want it, and to cancel my Shaw subscription.

haggholm: (Default)

Your hardware has changed significantly since first install, it informs me, so you have to re-activate Windows. Would you like to do so now?

Sure, I installed some new devices…but they were virtual devices.

haggholm: (Default)

I just bought a new, larger hard drive, and today I installed it in my desktop computer. I bought this computer from NCIX and, in a moment of pure indulgent laziness, paid them to assemble it for me rather than assembling it myself. Today I had to open it and move things around—and oh, but my earlier laziness came back to bite me in the ass.

The case has two 3.5" drive cages. In spite of the case manual’s suggestion that one use the lower cage “for optimal cooling and noise reduction” (or something to that effect), both pre-installed drives were in the upper cage, which sits directly in front of the video card. By “directly” I mean that they were so close that the power cord of the lower drive was physically touching the card. By “physically touching” I mean that it was, in fact, blocked by the card, so that I had to remove the video card to unplug the drive. To remove the video card, I had to unplug the system power cord. …And so on.

And of course all the cords were zip-tied together so tightly that the drive cage could not be removed without unplugging the drives, and the lower cage could not be reached without cutting numerous zip ties. And no power connectors were left for expansions, so I had to dig through boxes to find spares; ditto SATA connectors. As a bonus, the upper and lower drive cages use different attachment systems (the upper cage has drive bays, the lower does not), and the necessary screws were of an unusual type, so I had to find those too (this one isn’t the installing tech’s fault, though).

I have never spent so much time just physically installing a hard drive, but on the bright side, I expect that moving all the drives to the lower bay will significantly improve system cooling (since the hard drives were between the front air intake and the video card, sigh), and the case could use the cleaning it got; it was a mite dusty, if you’ll pardon the pun.

Now, of course, grub reports an error, presumably because the drive order has changed, or something (the BIOS setup correctly reports all three HDDs). I don’t know, and I lack the energy to work at it tonight. Hopefully tomorrow night it will turn out to be a quick fix to get the system running, rather than something horribly wrong.
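For what it’s worth, with GRUB legacy (the likely bootloader on a 2009-era system), this sort of error usually comes down to the BIOS drive order no longer matching what grub recorded at install time. The fix is typically to update two config files—a sketch, where all the device names are assumptions for a three-drive system and should be checked against `fdisk -l` on yours:

```
# /boot/grub/device.map — remap grub's (hdN) names to the new BIOS disk order
# (device names below are assumptions; verify with `fdisk -l`)
(hd0)   /dev/sda
(hd1)   /dev/sdb
(hd2)   /dev/sdc    # the newly added drive

# /boot/grub/menu.lst — make sure the root line points at whichever
# disk/partition actually holds /boot under the new ordering
title   Linux
root    (hd0,0)
kernel  /vmlinuz root=/dev/sda1 ro
```

If the boot sector itself needs rewriting after the reshuffle, `grub-install /dev/sda` (or, from the grub shell, `root (hd0,0)` followed by `setup (hd0)`) should do it.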

haggholm: (Default)

If you loathe the Adobe Acrobat reader half as much as I do, you might be happy to learn that Evince, the standard PDF reader for the GNOME platform, now has a Windows version (get it here). I have not used this Windows version myself, but expect good things. (This latest version of Evince also added support for the one feature I was missing: Displaying annotations.)

Evince is what made me stop hating PDF documents—it does nothing fancy, but displays PDF (and Postscript) documents cleanly, quickly and efficiently. Searching for text in a document resembles, well, searching for text in a text document rather than asking your computer to reindex all its documents while attempting to compute a cure for all cancers, or whatever Adobe make their reader do to slow it down to the startling degree I have come to expect. (If—if—this sounds like an exaggeration, it’s because (1) the Adobe reader for Linux is even worse than the Windows version, and/or (2) they have improved the Windows version since I last used it, reversing a long-standing tradition of adding more and more features that nobody uses except your CPU.)

More seriously and less sarcastically, Evince was the first application that really struck me with a “less is more” sort of beauty—an object lesson in UI design, if you will. It’s there to do one thing: Let me view PDF and Postscript files. It has almost no buttons, options, switches, or fiddly bits. And yet, in its stark simplicity, it was so vastly superior to the obvious alternative that it made me view PDFs as a good format for portable documents rather than a plague upon the internet.
