haggholm: (Default)

Mail encryption

If privacy of your email matters to you, you may want to consider using end-to-end encryption. This is different from using a “secure” email provider: With end-to-end encryption, the sender of the email encrypts it before sending, and the receiver decrypts it after receiving it. It's not decrypted during transit—thus, even your email provider cannot read your email.

Without going into mathematics whose details I’d have to review anyway, suffice it to say that PGP is a strong system of email security that, with sensible (default!) settings, cannot be effectively broken with modern technology and modern mathematics.¹ (Strong enough, in fact, that various US security agencies tried to suppress it, and when its creator released it for free to the world, he was taken to court and accused of exporting munitions. The case never really went anywhere.)

PGP uses something called asymmetric encryption. Technical details aside, the nifty and amazing thing about it is that it’s what’s known as a public key cryptosystem, meaning that I can give you a (public) key that you can use to encrypt messages for me, but no one², not even you, can decrypt them…except for me, as I retain a special (private) key with the unique power to decrypt. My public key is here, should you wish to send me secure email.
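To make that asymmetry concrete, here is a toy sketch in Python (3.8+) with comically small numbers. This illustrates the principle only: real keys are thousands of bits long, and real PGP uses the asymmetric cipher only to protect a symmetric session key, not the message itself.

```python
# Toy RSA: illustration only. Real keys are thousands of bits, and
# PGP encrypts the message with a symmetric cipher, using RSA only
# to protect the symmetric session key.
p, q = 61, 53              # two (secret) primes
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                     # public exponent: (e, n) is the public key
d = pow(e, -1, phi)        # private exponent: (d, n) is the private key

def encrypt(m: int) -> int:
    """Anyone holding the public key can do this."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the holder of the private key can undo it."""
    return pow(c, d, n)

message = 42
assert decrypt(encrypt(message)) == message
```

Anyone can run `encrypt`, but without `d` (which is never shared) there is no feasible way to reverse it, short of factoring `n`: that is the whole trick.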

My preferred solution is a plugin called Enigmail for my mail client of choice, Thunderbird.

PGP with Enigmail in Windows

There are some other solutions, none of which I have used.

PGP for webmail (like GMail) in Chrome

There’s a browser plugin called Mailvelope (currently Chrome-only, though a Firefox version is in the works) that will transparently encrypt/decrypt your webmail. There's a helpful guide. In the meantime, there’s another plugin for Firefox, but it has received mixed reviews.


PGP on Linux

If you’re using Linux, this shouldn’t be a problem in the first place. Install your mail client of choice and it surely comes with or has an OpenPGP plugin readily available.


PGP on other platforms

Apparently there's an Outlook plugin. There’s a plugin for OS X Mail.app.


As a reminder, keep in mind that your security is never stronger than your password; and your password is never safer than the least secure place it is stored. If your password is weak (password, your name, your date of birth…) there’s no helping you. If your password is pretty strong (|%G7j>uT^|:_Z5-F), then that’s no help at all if you used it on LinkedIn when they were hacked and their password database stolen, so that any malicious hacker can just download a database of email addresses paired with their passwords.

The solution is to use strong passwords, and to only use each password in one place—if you steal my LinkedIn password you can hack my LinkedIn account, but you can’t use that password to access my bank, email, blog, or any other account. The drawback of strong, single-use passwords is that you’ll never, ever remember them. The counter is to use software to remember them for you.

My preferred solution is a family of software called KeePass for Windows, KeePassX for Linux, KeePassDroid for Android (my smartphone), and so on. This has the primary feature of storing all my passwords in a very strongly encrypted file. I store this file in Dropbox, which lets me share it between my devices (work computer, home computer, laptop, phone). I don’t consider Dropbox fully trustworthy, but that’s OK: if anyone breaks into my Dropbox account, all they’ll get is a very strongly encrypted archive that requires either major breakthroughs or billions of years to break into (or a side-channel attack like reading over my shoulder, literally or electronically; but if they can do that my passwords don’t matter anyway). Thanks to this, I can use very strong passwords (like -?YhS\[q@V4#]F'/L|#*1z)_S".35/#T), uniquely per website or other service. Meanwhile, I only have to remember one password: the password for KeePass itself (which I shouldn’t use for anything else).

The KeePass family of software also tends to come with good password generators, which can generate garbled gobbledygook like the above, and/or apply constraints if a given website won’t let you use special characters (e.g. you can tell it to generate a random password 10 characters long with letters and digits but nothing else, which includes at least one letter and one digit). You can also use it to store files, which is a nice way to keep track of things like PGP keys.
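KeePass’s generator is built into the application, of course, but the idea is easy to sketch in Python with the stdlib secrets module (the function name and constraint policy here are my own illustration, not KeePass’s):

```python
import secrets
import string

def generate_password(length: int = 10,
                      alphabet: str = string.ascii_letters + string.digits) -> str:
    """Random password drawn from a cryptographically strong source,
    regenerated until it contains at least one letter and one digit."""
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if any(c.isalpha() for c in pw) and any(c.isdigit() for c in pw):
            return pw
```

The retry loop is the simplest way to satisfy an “at least one of each” constraint without biasing particular character positions.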

¹ It may be broken in the future if usefully sized quantum computers are ever built or if a major mathematical breakthrough is made in prime factorisation—mathematicians have failed to make this breakthrough for some centuries now. If this happens, then I still wouldn’t worry about my personal email: the communications of major world governments, militaries, and corporations will become as vulnerable as mine, and will be a lot more interesting.

² That is to say, there are no direct attacks known to be effective against the ciphers used. There are side channel attacks, which is a fancy way of saying that someone could break your encryption without defeating the mechanism. For example, they could be reading over your shoulder, or they could install malware on your computer that records keystrokes (when you type in passwords), or they could beat you with a $5 wrench until you talk.


Recently, Valve announced that they are working on a Linux version of the Steam client, along with a port of Left 4 Dead 2. Rumours of plans for a Linux version of Steam have floated around since roughly the dawn of time, but now it is official; now it is real.

Reactions are mixed, from gung-ho enthusiasm to RMS-style caution. Personally, I am enthused. This is partly because I am not a free software purist, and partly because I regard this as a win-win scenario.

Cards on the table: I run Steam, and I own games on Steam. Their DRM is not a deal-killer for me. That said, I prefer DRM free software, and I would rather buy games via Good Old Games, who are entirely DRM free. (If a game is available via both services, there is no contest: GOG every time.)

However, I think that this move can only be good for Linux. Even if you never run a Steam game in your life, this is a good thing. One possibility is of course that this effort of Valve's fails, in which case nothing really changes. But consider what happens if they are at all successful:

  1. Currently, there is no significant market for games on Linux, because gamers all run Windows (or, I suppose, OS X); and gamers all run Windows (…) because there are no games for Linux. It's a chicken-and-egg situation; there's no supply because there's no demand, but there can be no demand because there's no supply to demand from. Launch a few AAA titles on Linux and suddenly there will exist a games market. It may thrive or it may fail to thrive, but this kind of effort gives it a real fighting chance.

  2. Games are important. Gaming is a big piece of what computers are used for, and probably the only piece where the average consumer currently has any reason at all to go with Windows over a user-friendly Linux distribution. Having a games market will be good for Linux adoption.

  3. It will be good for indie developers. Even if, initially, only a few AAA games are ported, and only a few AAA developers care about the Linux games market, Steam remains a powerhouse delivery vehicle for games, now to a potentially new market. This should leave a lot of room for indie developers to exploit this new space in a way that cannot currently be done without a good way of reaching consumers.

    As a bonus, I expect indie developers are much better situated to port software, because they don't have massive, hard-to-port codebases to deal with (because indie games are smaller), and because they don't have bleeding-edge graphics and so don't need to worry quite so much about performance; thus a penalty from using a less-efficient cross-platform library, or a performance hit from a less-than-perfect port, is more affordable.

    And I gather that the Humble Linux Bundle proved that Linux users are quite willing to support indies, whose ethos aligns more easily with OSS mentalities than that of AAA corporations.

    So I regard this as Valve breaking the ice with Steam and an AAA title, whereupon indies will have the powerhouse delivery vehicle of Steam, along with a few larger players, to expand the market. Hopefully more big names will follow.

  4. As this market grows, so will the demand, now backed by real money, for better and better video drivers. The Valve team have already collaborated with Intel to improve OpenGL performance. And big players like Valve are well placed to put some pressure even on giants like NVIDIA and AMD if and when there are problems.

  5. Even if you hate DRM with a burning passion… If a games market is once established in the Linux world, there will be more room for niche players (like Good Old Games) to edge in. It may sound paradoxical to suggest that Valve moving into a virtual monopoly with Steam would improve GOG's position, but I think it may be so. Right now, there's no reason for GOG to target Linux because there is no market, and there are very few games. If a cultural shift of any statistically significant magnitude occurs, then there will be a market (ergo consumers to target), and game developers will be more motivated (and better equipped) to produce Linux versions of games.

We'll see how the ports actually work out, but I for one wish Valve the best of luck and regard the whole thing as a positive development.


Ubuntu 11.04 is released, with Unity as the default UI. I decide to try it out on my home desktop—it often serves as a (Windows) gaming computer, so I use it less for work of any kind (i.e. with Linux) than my laptop or my office desktop. Upgrading Ubuntu leaves me with the impression that Unity is actually worse than I thought: It looks like some horrible, developmentally challenged bastard offspring of OS X and a Fisher-Price toy, i.e. rather like an Apple computer except dumber and implemented poorly. Needless to say I am not impressed—it might be more accurate to say that I feel like my computer has been vandalised.

So I experiment with this and that. Gnome Shell on Ubuntu, though it’s not officially supported, and has a lot of hiccups. KDE in both a Fedora Spin and the Kubuntu version—it impresses me as having improved greatly since I last used KDE, but still holds no great appeal, not least because some features are so poorly integrated that the recommended solution is to install Gnome tools instead, notably NetworkManager configuration (go ahead, change your default connection to a static IP using the KDE tools).

I also decide to try Fedora 15, which is still in beta, with Gnome: They allegedly do a good job of releasing a desktop with the standard Gnome 3 Shell, which may or may not annoy me but is at least worth a shot. So I install that and find that the menus and launchers don’t work, but okay—it’s prerelease software and I haven’t updated the packages; of course there will be some issues at first. So I fire up gnome-terminal and yum upgrade and it starts installing hundreds of updated packages and…freezes mid-update. Well, it’s prerelease software and it’s replacing most of Gnome while I’m running Gnome, no biggie: I won’t hold beta crashes against anybody. Annoying to have to hard reboot, but it’s no worse than— Except it is, because on hard reboot, Fedora kernel panics early in the boot process.

Now here’s the interesting part. Not only does it kernel panic—my keyboard doesn’t work. And I mean at all—not in Grub, not during BIOS startup, not at all. I gather what can and probably did happen is that the OS sets a USB mode that the BIOS can’t use, but restores it during shutdown, which latter will of course not happen if you have to hard reboot. This is now a bit of a problem, because with the OS panicking on boot and my keyboard unusable, I can’t access the boot menu to start from CD, my HDD being the primary boot device; nor can I enter the BIOS setup to change boot order. To make things even better, if I disconnect the HDD, my BIOS cleverly decides not to boot from the secondary boot device (which would run a Linux live CD and probably load and restore USB functionality), but to issue an error message about a system disk being missing.

(On a side note, this is not a Linux issue. The same thing can happen if you run e.g. Windows. You just have to be really, really unlucky, regardless of which OS you choose.)

All in all, it has not been a good month for my computer.

I’m borrowing an old PS/2 keyboard from the pile of six such keyboards sitting on a shelf in the office, gathering dust; I doubt anyone will care if it goes missing for a day. With any luck at all, the PS/2 keyboard will work and I can change the boot order, load a live CD, reset the BIOS options, something to fix this damned thing. If I’m unlucky, of course, PS/2 will be disabled and I will have to reset the BIOS via hardware pins, if that’s even possible on my mainboard. Otherwise, presumably, it’s fucked and will need replacement.

I have to say I’m rather unhappy about this entire experience.


In a spectacular display of missing the point, a group of “security researchers” released a Firefox plugin called BlackSheep, designed to combat Firesheep by detecting if it’s in use on a network and, if so, warning the user.

To explain why this is at best pointless and at worst harmful, let’s recapitulate what Firesheep does: By listening to unencrypted traffic on a network (e.g. an unsecured wireless network), it steals authentication cookies and makes it trivial to hijack sessions on social networks like Facebook.

Let’s use an analogy. Suppose some auto makers, Fnord and eHonda and so on, were to release a bunch of remote controls that could be used to unlock their cars and start the engines. Suppose, furthermore, that these remote controls were very poorly designed: Anyone who can listen to the remote control signals can copy them and use them to steal your car. This takes a bit of technical know-how, but it’s not exactly hard, and it means that anyone with a bit of know-how and a bit of malice can hang around the parking lot, wait for you to use your remote, and then steal your car while you’re in the shop.

Now suppose a bunch of guys come along and say Hey, that’s terrible, we need to show people how dangerous this situation is, and they start giving away a device for free that allows anyone to listen to remotes and steal cars. What this device is to Fnord and eHonda remotes is exactly what Firesheep is to Facebook, Twitter, and so forth. What’s important to realise is that the device is not the problem. It does allow the average schmoe, or incompetent prankster, to steal your car (or use your Facebook account), but the very important point is that the car thieves already knew how to do this. Firesheep didn’t create the problem, but by making it trivial for anyone to exploit, it generated a lot of press for it.

What the Firesheep guys wanted to accomplish was for Facebook and Twitter and so on to stand up and, in essence, say Whoops, we clearly need to make better remote controls for our cars. (It’s actually much easier for them than for our imaginary auto manufacturers, though.) True, Firesheep does expose users to pranksters who would not otherwise have known how to do this, but the flaw was already trivial to exploit by savvy attackers, which means that people who maliciously wanted to use your account to spam and so forth could already do so.

Now along come the BlackSheep guys and say, Hey, that’s terrible, the Firesheep guys are giving away a remote that lets people steal other people’s cars!, and create a detection device to stop this horrible abuse. But of course that doesn’t address the real point at all, because the real point has nothing to do with using Firesheep maliciously, but to illustrate how easy it was to attack a flawed system.

This is stupid for several reasons:

  1. If BlackSheep gets press, it might create an impression that the problem is solved. It isn’t, of course, firstly because Firesheep wasn’t the problem to begin with, and secondly because BlackSheep only runs in, and protects, Firefox.

  2. People running badly protected websites like Facebook could use BlackSheep as an excuse not to solve the real problem, by pretending that Firesheep was the problem and that problem has been solved.

  3. Even as a stop-gap measure, BlackSheep is a bad solution. The right solution is for Facebook and Twitter and so on to force secure connections. Meanwhile, as a stop-gap solution, the consumer can install plugins like the EFF’s HTTPS Everywhere that force secure connections even to sites that don’t do it automatically. This is a superior solution: BlackSheep tells you when someone’s eavesdropping on you; HTTPS Everywhere prevents people from eavesdropping in the first place.

    Let me restate this, because I think it’s important: BlackSheep is meaningful only to people with sufficient awareness of the problem to install software to combat it. To such people, it’s an inferior solution. The correct solution is not to ask consumers to do anything, but for service providers (Facebook, Twitter, …) to fix the problem on their end; but if a consumer does anything, BlackSheep shouldn’t be it.

As I write this post, I hope that BlackSheep will get no serious press beyond a mention on Slashdot. It deserves to be forgotten and ignored. In case it isn’t ignored, though, there need to be mentions in the blogosphere of how misguided it seems.

Let’s all hope that Facebook, Twitter, and others get their act together; meanwhile install HTTPS Everywhere.


Short version: Facebook sucks, Twitter kind of sucks, and if you use Firefox, install this add-on now.

Long version:

A Firefox extension called Firesheep made the news recently in a pretty big way. In a nutshell, it’s a Firefox addon that allows just about anyone to walk into, say, a coffee shop with an open wireless network and capture the cookies¹ authenticating everyone in there with Facebook. Or Twitter. Or…well, any of a great number of sites where you probably don’t want just about anyone to read everything you or your friends have posted, or pose as you.

Of course, Firesheep didn’t create this problem. What it did—and what it was designed to do—was to highlight the problem by creating a tool so easy to use that anyone could do it. Malicious hackers who actually want to steal your credentials and do evil things with your accounts could already do it. The Firesheep people wanted to give the problem more press, and they succeeded. It’s as if all car alarms had an easy way to disable them that the car thieves already knew; Firesheep gave it publicity by giving away a simple toolkit to average consumers. (It’s also not, strictly speaking, limited to wireless networks, but open wireless networks are a huge opportunity.)

I won’t go into the technical details in great depth at this point. Suffice it to say that you can connect to many servers, including all these big social networks, in a secure way. The problem is that it’s not the default. Use a secure SSL connection (you’ll recognise it by the https:// in the URI) and this particular problem goes away.

If you happen to use Firefox, there are actually plugins that you can install that force the browser to use the secure versions of various websites. You can look up ForceTLS on your own time if you are so inclined. I will personally go with HTTPS Everywhere, published by the EFF. Simply install it, and Firefox will connect to Facebook, Twitter, &c. in a way that cannot be eavesdropped upon.

Of course, it has limitations; see the FAQ. Ironically, LiveJournal is an example of a site that doesn’t actually work because it doesn’t really have a secure version.

The real, long-term solution is obvious to anyone who understands the problem and knows anything about authoring websites: No credentials should ever be transmitted over unencrypted channels (except values designed to be safe in the clear, like a Diffie-Hellman key exchange). If you ever write a webapp with authentication, only allow authentication over SSL, and only send the cookie over SSL. If you’re worried that it’ll be too bandwidth or CPU intensive, by all means serve static content—anything that you explicitly, whitelist-wise know to not matter in terms of security—over plain HTTP, just on a different subdomain or domain or what have you than the one on which you serve the cookies. (There are other reasons, anyway; my own website already serves static content from a separate subdomain because caching rules are very different, and static content can be served up by Lighttpd rather than an Apache instance that runs everything through mod_wsgi and Django.)
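As a minimal server-side sketch of the cookie half of this, using Python’s stdlib http.cookies module (the session token here is a made-up value):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sessionid"] = "d41d8cd98f00b204"  # hypothetical session token
cookie["sessionid"]["secure"] = True      # only ever sent over HTTPS
cookie["sessionid"]["httponly"] = True    # not readable from JavaScript

header = cookie.output()  # a Set-Cookie: header with Secure and HttpOnly set
```

With the Secure flag set, a browser will refuse to send the cookie over plain HTTP at all, which is precisely what defeats the Firesheep-style eavesdropper.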

Unfortunately, this is not a solution that the client can effect: It’s up to website and service authors. As a user, the best you can do is stick to SSL whenever possible.

¹ The cookies, not the credentials. Even a site with insecure cookies may transmit passwords securely. Conversely, even if you log in through a secure https:// login page, the cookies may be served up for anyone to sniff and steal.


Today’s xkcd is a good reminder of the dangers of password reuse—and a good time to remind whosoever might stumble across this that while it’s hard for humans to invent and remember different passwords for different services, it’s easy for software to do it. I use SuperGenPass (download here), which lets me type in the same password everywhere but actually use a different one on every site: Google does not have my LiveJournal password; LiveJournal doesn’t have my Facebook password; Facebook doesn’t get to see a password I use anywhere else.
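The idea is easy to sketch (note: this is the concept only, not SuperGenPass’s actual algorithm, and the function is my own illustration): hash the master password together with the site’s domain, and use part of the result as that site’s password.

```python
import base64
import hashlib

def site_password(master: str, domain: str, length: int = 12) -> str:
    """Derive a per-site password from a single master password."""
    digest = hashlib.sha256(f"{master}:{domain}".encode()).digest()
    return base64.urlsafe_b64encode(digest).decode()[:length]

# One master password, but every site sees a different derived one;
# a password stolen from one site is useless anywhere else.
fb = site_password("correct horse", "facebook.com")
lj = site_password("correct horse", "livejournal.com")
```

Because the derivation is deterministic, nothing needs to be stored anywhere: typing the same master password on the same site always regenerates the same site password.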

(xkcd: “Password Reuse”)

It’s been eight months since the last time I thought about getting a new phone, and I have not grown any more fond of my existing phone in the interim. The reason why I haven’t bought a new one is that when I last looked, exactly no phone at all met my criteria.

  • Must handle generic IMAP and handle it well
  • Must have a GPS (because Google Maps + GPS is too good a combination to forego)
  • Must be able to intelligently access my Google calendar

Some very major nice-to-haves:

  • Able to update/synchronise my Google calendar
  • Able to utilise an LDAP address book for emailing; ideally: Synchronise contacts
  • Able to synchronise contact info with a Linux computer (not so important if there’s a good LDAP solution)

In particular, the phones available when I last looked (the iPhone version whatever, and the Android G1) both had a reputation for poor IMAP support, and there was official support for LDAP on exactly no devices. Now it seems that the more recent iterations of the iPhone OS do come with LDAP support. The notion of being able to maintain a single phone-and-address book (which I can, in addition, easily back up: in fact, I do so regularly with a cron job) is appealing. The notion of using a single Google calendar instead of separate calendars on my phone and my computer (which leads me to check both less frequently since neither is all that helpful) is equally so.

I’d rather go with an Android device since I prefer the openness and I hate to be associated with Apple fanboys even by coincidence, but it does seem like the iPhone does more to meet my requirements right now than does any Android platform on the market.

I guess all it comes down to at this point is, one: How good is that LDAP support? —and two: How nicely does the iPhone interact with vanilla IMAP accounts?


Having adopted SpiderOak to manage files that are too large to conveniently manage in my version-controlled home directory, as well as files with data sensitive enough that I’d rather not put it on the same server as my webapps, the time has come to finally get off my arse and clean up that mess.

While there were files that were always clearly too large to commit to my repository and were only ever backed up remotely once I adopted SpiderOak as a solution, there were also files that I would now clearly back up with SpiderOak, but which I then committed to version control. There are also some files sufficiently sensitive that I should not have committed them, but I did… Time at last to delete them.

Of course, deleting files from most version control systems is non-trivial. My files sit in Subversion. To delete them, I have to dump the entire repository into a dump file, run this file through a filter to exclude specified paths, create a new repository, and import the filtered dump into this. A bit cumbersome, but ah well—at least it works. And it seems that, preliminarily, I am able to shrink the repository size by at least 54% 71% 78%.¹

While I’m messing around with this stuff, I just might switch to Mercurial. Hmm…

¹ From an original, hideous 2.4G to a somewhat less awful 1.1G 710M 521M. The Hg repository weighs in larger at 1.6G 965M 743M, for some weird reason (still about 33% 60% 69% smaller than the original subversion repository). —All as measured by `du -sch`.


Good thing: OpenSSH 4.9+ offers Match and ChrootDirectory directives which can filter users by group and chroot-jail them to their home directories.

Weird thing: ChrootDirectory requires that the full path up to and including the home directory be writable by root only. This means that the users must not have write permissions on their own home directories. As far as I can tell, I can only make this useful by creating user-writable subdirectories inside. (This works fine for our purposes, but is, well, sort of bizarre: Home directories that the users cannot write to!)
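For concreteness, the relevant sshd_config stanza looks roughly like this; the sftponly group name and the SFTP-only restriction are my assumptions for a typical setup, not the only way to use Match:

```
# Chroot members of the "sftponly" group into their home directories.
# Requires OpenSSH 4.9+; every directory from / down to and including
# the home directory must be owned by root and not group/world-writable.
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
```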

Bloody annoying thing: RHEL 5 comes with, I believe, OpenSSH 4.3. Versions with ChrootDirectory have been around for years, but naturally RHEL is a few more years behind, so I have to create my own package to get a chroot-capable SSH setup. It’s not hard, but it is annoying and adds a maintenance burden.


The motivation

Unhappy with my cable connection’s performance at various times, I looked around for a better ISP. I decided to go with TekSavvy, for a couple of reasons:

  1. They offer a service I like at a good price. For the same price as I pay for my 7.5M down/512k up, I get 6M down/1M up with TekSavvy. Note that speed can be deceptive: Shaw is cable, and cable is shared with other nearby cable subscribers and tends to slow down during peak usage hours. The DSL line is all mine. And the upload speed is hugely greater, which is important to me—I never saturate a multi-megabit downlink, but when I backup large amounts of data or push data to my webserver, poor uplink speed is sometimes frustrating.

    Both my old Shaw service and my new TekSavvy service impose monthly total bandwidth caps. Shaw: 60 GB. TekSavvy: 200 GB. (In each case, GB, not Gb.)

  2. TekSavvy does not throttle your traffic. In fact, they actively lobby for net neutrality and so forth. This is of course in their own best interest as a lower-tier ISP (they rent lines from companies like Bell and Telus, and don’t want the line providers to throttle their traffic), but the fact remains that by buying service from TekSavvy, I am paying people who advocate for the openness and regulations I want.

    Shaw, of course, throttles traffic like torrents, and throttles small companies and consumer interests whenever they can.

  3. I’ve used them before (in Québec), and left with an impression of good value for my money.

The service

I just got my DSL service, so it’s too early to comment on stuff like reliability (obviously I expect no problems—if I did, I wouldn’t have switched). The one minor thing worth remarking on is that as far as CNET’s bandwidth tester is concerned, my 6M DSL service is actually slightly (8.6%) faster than my 7.5M cable service—so don’t let that number scare you.

While I haven’t really tested this (so it’s subjective), I am surprised to find that my browsing feels much, much snappier than on the Shaw connection. I would guess that while the bandwidth is similar, my latency is much better.

The misadventures and the support

It is crucial to understand that TekSavvy is an internet service provider. They provide internet access. What they do not do is own, install, or manage the actual lines coming into anybody’s home. The lines are owned by the big telcos, like Bell or (as in my case) Telus. Any work on line installation is done by Telus agents. If you live in Vancouver (else substitute your local line provider as appropriate) and sign up with a lower-tier ISP like TekSavvy, and you need DSL activated in your home, they will send a work order to Telus. Telus technicians activate it, and Telus technicians may (or may not) visit your home.

It is perhaps interesting to note at this point that Telus also offers DSL service, so when they provide line service to TekSavvy customers, they are doing work for their direct competitor.

With the preliminaries out of the way, here’s a timeline:

  • November 4: I place an order with TekSavvy. I receive a confirmation email telling me that my up to 6. Megabit dry copper loop residential DSL service is set to be tentatively activated on 11/11/2009 anywhere between the hours of 8am to Midnight, though warning me that my activation date could be delayed up to 10 business days due to unanticipated circumstances.

  • November 5: I receive a phone call from TekSavvy. Because Telus has a worker shortage in BC, they—Telus—have delayed my scheduled activation date until December 7.

  • November 6: I receive a phone call from TekSavvy. They misread something and the proper rescheduling date is December 1, not December 7.

  • December 1: I work from home because Telus may need access to my apartment to test things. I don’t like to work from home as I am less productive, but do it—it’s just this once, after all. I see neither hide nor hair of any Telus technician, nor do I hear from them.

  • December 3: I receive a phone call from Telus informing me that my DSL service has been activated. No explanation is given as to why it was delayed past the (already postponed) scheduled date.

  • December 4: I spend a lot of time on the phone with TekSavvy support. My ADSL router came with some weird factory settings, so it takes some resetting and configuring before the real problem emerges: TekSavvy and Telus use different DSL authentication mechanisms (or some similar DSL jargon), and the Telus techs configured my line for Telus DSL service, not TekSavvy. TekSavvy can do nothing but file a ticket with Telus, and since Telus support works Monday through Friday, 8 am to 8 pm (unlike TekSavvy’s 24/7), I will have to wait until after the weekend to see any change.

    Note: Router settings are not TekSavvy’s fault, since I bought my own DSL modem/router rather than getting one from them. All the same, they did help me troubleshoot it, even though it was not one of their recommended/official models, so point to them.

  • December 7: Wonder of wonders! The DSL connection works! Unfortunately, the line rate is not what I’m paying for, just 1536 kbps down/637 up. (Note: Line rate, not measured connection rate.) It should be about 6000/1000. I call TekSavvy support again. I’m told that this is another configuration error by the Telus techs—one they make pretty often. Once again, it’s not within the power of an ISP to fix this; has to be done by the company running the lines. Another ticket to Telus.

  • December 8: It works! I actually have the line rate I’m paying for!

Obviously, this was all a bit of a pain in the arse.

Note that apart from a minor error (a misreading that affected nothing and was quickly corrected), it appears that all of this was the fault not of TekSavvy, but of Telus. I gather that my case was atypically difficult by quite a wide margin, but I can’t help but wonder if Telus are just not very interested in making things easy on customers of their competitors, regulations be damned…

On the other hand, TekSavvy support was wonderful. When I called for help (which, as mentioned, is available 24/7),

  1. I spoke to competent technicians. TekSavvy does not bother with a call centre full of support staff reading from scripts. Their tech support consists of people who actually understand technical issues.

  2. I did not feel condescended to or talked down to. This is a huge deal to me, as I am very sensitive to it, and as my technical expertise makes a lot of the support scripts many companies use sound like baby talk. The TekSavvy guys, of course, stepped me through the bits I was clueless about, but passed briefly over the parts it was clear I knew. This requires respect for the customer’s intelligence when any is exhibited, and it requires competence (a support monkey reading a script but lacking actual technical expertise can’t possibly judge mine, and won’t know how to address problems in a manner not dictated by the script).

    This alone would make me happy to recommend TekSavvy. I usually find tech support calls a frustrating and sometimes enraging experience. Here, in spite of the annoying frequency of issues caused by Telus, it was more like friendly banter with people I could relate to.

  3. They called to follow up on every issue. They didn’t have the power to directly address a lot of the problems, as they had to go through Telus, but they did make sure that I was kept in the loop as things progressed.


Based on my experiences thus far, I’m happy to recommend TekSavvy. The service itself seems good, the tech support is excellent, and the company supports openness and net neutrality rather than consumer-crushing and unethical business practices. I’m happy to give my money to these guys.

I had a huge delay and numerous issues in getting the service working, but as far as I can tell, all of them were due to Telus, and I wouldn’t want TekSavvy to suffer for that. And, of course, odds are pretty good that you won’t be hit with every Telus fuckup in the book, like I was.

The only remaining items on my ISP agenda are to buy a longer phone cord so I can move the modem to where I want it, and to cancel my Shaw subscription.

haggholm: (Default)

Your hardware has changed significantly since first install, it informs me, so you have to re-activate Windows. Would you like to do so now?

Sure, I installed some new devices…but they were virtual devices.

haggholm: (Default)

I just bought a new, larger hard drive, and today I installed it in my desktop computer. I bought this computer from NCIX and, in a moment of pure indulgent laziness, paid them to assemble it for me rather than assembling it myself. Today I had to open it and move things around—and oh, but my earlier laziness came back to bite me in the ass.

The case has two 3.5" drive cages. In spite of the case manual’s suggestion that one use the lower cage “for optimal cooling and noise reduction” (or something to that effect), both pre-installed drives were in the upper cage, which sits directly in front of the video card. By “directly” I mean that they were so close that the power cord of the lower drive was physically touching the card. By “physically touching” I mean that it was, in fact, blocked by the card, so that I had to remove the video card to unplug the drive. To remove the video card, I had to unplug the system power cord. …And so on.

And of course all the cords were zip-tied together so tightly that the drive cage could not be removed without unplugging the drives, and the lower cage could not be reached without cutting numerous zip ties. And no power connectors were left for expansions, so I had to dig through boxes to find spares; ditto SATA connectors. As a bonus, the upper and lower drive cages use different attachment systems (the upper cage has drive bays, the lower does not), and the necessary screws were of an unusual type, so I had to find those too (this one isn’t the installing tech’s fault, though).

I have never spent so much time just physically installing a hard drive, but on the bright side, I expect that moving all the drives to the lower bay will significantly improve system cooling (since the hard drives were between the front air intake and the video card, sigh), and the case could use the cleaning it got; it was a mite dusty, if you’ll pardon the pun.

Now, of course, grub reports an error, presumably because the drive order has changed, or something (the BIOS setup correctly reports all three HDDs). I don’t know, and I lack the energy to work at it tonight. Hopefully tomorrow night it will be a quick fix to get the system running rather than something horribly wrong.
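If it does turn out to be the usual drive-reordering problem, the typical fix is to make the boot configuration refer to filesystems by UUID instead of BIOS drive order. A sketch only, with example device names (the exact steps depend on whether the system runs GRUB legacy or GRUB 2):

```shell
# Sketch only; device names are examples. Run from the installed system,
# or from a live CD with the root partition mounted and chrooted into.
sudo blkid                   # list filesystem UUIDs, which survive reordering
# Refer to partitions by UUID rather than device name in /etc/fstab, e.g.:
#   UUID=0123abcd-...  /  ext4  errors=remount-ro  0  1
# Then point the boot loader at the right drive and regenerate its config:
sudo grub-install /dev/sda   # GRUB 2
sudo update-grub             # (on GRUB legacy, edit /boot/grub/menu.lst instead)
```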

haggholm: (Default)

If you loathe the Adobe Acrobat reader half as much as I do, you might be happy to learn that Evince, the standard PDF reader for the GNOME platform, now has a Windows version (get it here). I have not used this Windows version myself, but expect good things. (This latest version of Evince also added support for the one feature I was missing: Displaying annotations.)

Evince is what made me stop hating PDF documents—it does nothing fancy, but displays PDF (and Postscript) documents cleanly, quickly and efficiently. Searching for text in a document resembles, well, searching for text in a text document rather than asking your computer to reindex all its documents while attempting to compute a cure for all cancers, or whatever Adobe make their reader do to slow it down to the startling degree I have come to expect. (If—if—this sounds like an exaggeration, it’s because (1) the Adobe reader for Linux is even worse than the Windows version, and/or (2) they have improved the Windows version since I last used it, reversing a long-standing tradition of adding more and more features that nobody uses except your CPU.)

More seriously and less sarcastically, Evince was the first application that really struck me with a “less is more” sort of beauty—an object lesson in UI design, if you will. It’s there to do one thing: Let me view PDF and Postscript files. It has almost no buttons, options, switches, or fiddly bits. And yet, in its stark simplicity, it was so vastly superior to the obvious alternative that it made me view PDFs as a good format for portable documents rather than a plague upon the internet.

haggholm: (Default)

I recently decided to try SpiderOak to back up documents that are either too large or too sensitive to conveniently keep in my subversion repository. I signed up for one month at a cost of $10 to get 100 GiB of space. They offer 2 GiB completely free, and I can highly recommend this for storing smaller amounts of data (I would, except that I have, use, and like subversion for this).

Initial impressions: No problems with the packages¹ or UI. I can only assume that the Windows and Mac versions are identically smooth (with most products, after all, Linux gets the least attention and support). I had some issues where my upload speed would slow to a crawl, then a halt…but I think this is more due to Shaw, whether because the cable network gets overloaded at certain times of day, or because they throttle my connection.² However, this was not immediately obvious, so I asked SpiderOak tech support, just in case. Their response was prompt, friendly, and voiced in a way that didn’t seem to assume I’m an idiot (I’m very sensitive to perceived condescension). Thus, while SpiderOak’s support didn’t solve a problem for me, because there almost certainly was none on their part, their response seemed promising: Based on preliminary data, I like their customer support.

So far, I’ve backed up about 9 GiB of data. Of course, uploading this on a cable connection with a maximum of 0.5 Mbps upload rate, it’s rather painfully slow, but once I have the data uploaded, I won’t have to repeat it… Unlike services like DropBox, SpiderOak lets me specify which directories I want to upload (and exclude subdirectories, if I so desire), so I can keep my files organised how I want them. It also turns out to be trivial to synchronise files between different computers. Their FAQ has all the details. It’s as simple as it sounds, and probably simpler.

As you can probably tell, I’m very happy with the service so far, though I’ve only used it for a few days yet. It’s quick (except for my upload speed…), easy, and I like their security model a very great deal. Based on my limited experience, I would recommend it—especially to those among you who don’t currently have an online backup service. Why not? You can get 2 GiB of safe, automatic backup for free! And if you need more (as I do), $10 a month or $100 a year gets you 100 GiB, while most other services I’ve found charge the same for only 50 GiB of space.

Again, of course, if you decide to sign up, use my referral link and give me some extra space for free…

¹ When I installed it on Ubuntu Karmic, there was no “Ubuntu Karmic” package, but the Jaunty package worked fine. A few days later, a Karmic package was available—this was within perhaps a week of the initial Karmic release, mind. I believe the package was actually the same, though of course it’s reassuring to click a link with the correct legend.

² My solution? I’m switching to TekSavvy, who offer twice the upload speed and about the same download speed at a similar price, never throttle anything, are (as an ADSL provider) less likely to suffer congestion than cable, and are champions of net neutrality who deserve my money more than Shaw does. Unless my upload issue was actually SpiderOak’s fault rather than Shaw’s, I expect I’ll be happy with TekSavvy. (On the very, very remote chance that I’m not, I’ll just switch back.)

haggholm: (Default)

After downloading this morning’s find, my first thought was I must never lose this!—so I spent some time thinking about backup strategies.

Most of my data are backed up by shoving them into a subversion repository containing most of my home directory. This is a techy, nerdy way of doing things that works very well for some data, and gives me the ability to perform very sophisticated change tracking.

It works rather poorly for some data, though. In particular, it’s not ideal for storing large sets of binary data…like an 8.1 GiB repository of scanned books [embedded] in PDF format (or like music, or video files). It also has another weakness, not intrinsic to the mechanism but significant in my usage: Because my subversion repository is housed on the same server and server account as my websites, I’m not 100% comfortable uploading very sensitive data. It’s a shared server (although I have of course disabled read permissions for other users), and it runs, with my user privileges, my own webapps—which are of course no more secure than I made them.

So I decided it was finally time to look into alternative backup strategies. I’m quite happy with subversion for e.g. text files that I modify, my projects’ source code, and so forth, but for photos, videos, music, and large downloaded collections of RPG supplements that I’ll never edit anyway, I want something else. Having heard the name bandied about, I of course looked into DropBox, which looks quite OK. I did spend some extra time looking around, though, and came across a DropBox competitor I had not heard of: SpiderOak.

Both DropBox and SpiderOak offer a free 2 GiB storage account with paid upgrades to 50 GiB or more. Both offer secure, encrypted transport, synchronisation between multiple computers, etc. However, SpiderOak offers a few features that DropBox does not, some of which are quite interesting.

  • Sharing data in place rather than having to stick them in a dedicated directory; I can back up my documents directory, for instance, instead of having to create and use .DropBox/documents.
  • “Zero knowledge” security means that data are stored encrypted, and SpiderOak does not store my password. This is fantastic and wonderful (though it does come with the caveat that if my password is lost, it cannot be retrieved). No matter what I upload, encrypted transport means that no one can eavesdrop on it, and encryption means that no one, not even SpiderOak employees, can get at it. I can be as comfortable storing even very sensitive data, like passwords and personal information, in SpiderOak as I can on my local computer (however comfortable you think I should be with that).
  • Extra storage at half the price is a pretty obvious advantage. $10/month gets me 50 GiB at DropBox or 100 GiB at SpiderOak.

Client software is available for Linux, Windows, and OS X, so you can share data across platforms. (This is also true of DropBox, of course.) Unlike DropBox, much (though not all) of the client software is open source, and SpiderOak claims that they are moving towards a full OSS client. (They’ve already shared some code.)

On paper, then, SpiderOak is about as close to perfect as it can get for my needs. What remains to be seen is just how smooth and seamless the experience turns out to be when I start using it (it has a reputation in some parts for being a bit of a resource hog; to me, that sounds like a small price). If it’s as good as I’m hoping, I will recommend it to everyone I know.

If this convinces you to sign up, please use this referral link to give me some bonus space in return for my time writing this up. (Pretty please?)

haggholm: (Default)

From this story, Strong Passwords Not As Good As You Think, by some commenter called Rob the Bold:

According to the article (cited by the citation): "Users are frequently reminded of the risks: the popular press often reports on the dangers of financial fraud and identity theft, and most financial institutions have security sections on their web-sites which offer advice on detecting fraud and good password practices. As to password practices traditionally users have been advised to . . . "

-Choose strong passwords

-Change their passwords frequently

-Never write their passwords down

I would suggest that this is a case for the popular quip: "Pick two".

Personally, I can’t be arsed to change passwords frequently, which makes unique passwords all the more important: Since I rarely change them, I need to make sure that if somebody steals all the passwords from site A, that doesn’t compromise my accounts on sites B through Z. Have I plugged SuperGenPass lately?

haggholm: (Default)

Since I’m on a security spree, finally getting my arse in gear to do what I should have been doing for a long time, I decided to also generate a new PGP key that actually matches my current email address and perhaps (wonder of wonders) actually sign email by default. I may or may not bother about encryption; it’s certainly a nice-to-have, but I’m trying to ease into good habits, and I want to read up more on backing up public keys¹.

What this means is that I am curious about what mail client you use, because people reading this post comprise a pretty hefty chunk of all the people whom I want to be able to read my mail. Since some mail clients (notably Microsoft clients) are a bit iffy when it comes to features like PGP/MIME, from what I’m told, it would be very nice to know what I can rely on recipients being able to receive…

[Poll #1420360]

¹ Questions include:

  • How do I back up all my known public keys to begin with? —Automatically, if you please. If I have archived, encrypted emails, I would very much like to keep keys around so I can read them…
  • What happens when somebody expires a key, and I sync with keyservers? Does it stay in my keyring by default? What about revoked keys?
haggholm: (Default)

…And by you I mean all of you, so please at least take the time to read and think about this. Don’t worry if there are a few technical bits thrown in here and there; the message should be quite clear.

I have been putting off securing my data for much longer than I really should have. I am not, by nature, a paranoid person, and when it comes to high-powered encryption solutions, I agree with Randall Munroe of xkcd. I don’t need 4096-bit encryption, I am not going to worry about forensic analysis…I do not live in Iran. Someone said, and I agree, that

Altogether, encryption of /home and /tmp prevents someone from accessing your private data by just using a Live-CD with your computer.

I consider something secure, when the effort to bypass or break it exceeds the benefit you get from breaking it.

But I do care enough that I want my data encrypted, and you should too—especially if any of the following applies to you:

  • You use a laptop. A user account password prevents somebody from just logging in as you, and is of course a must-have, but account passwords won’t help you at all if your laptop gets stolen, because all anyone needs to grab all your data is a rescue or install disk.
  • You use your browser, mail client, etc., to save your typed-in passwords or logged-in sessions.
  • You use only a small set of passwords, so that having one password compromised impacts you in many places. Actually, if you do, read this and start using SuperGenPass.

As it happens, all of the above apply to me, and I know the risks full well, so it’s hard to justify the fact that I have gone so long without encrypting my data. In all honesty, it’s sheer laziness. At least I am catching up now…

The biggest danger is that if you have a laptop and it gets stolen, somebody could use a combination of saved passwords and password reset mechanisms—after all, they have access to your email account!—to break into virtually any service you have electronic access to. This is not just about somebody reading your private letters (bad enough); this is about somebody being able to use any electronic service you can use, possibly with the exception of your bank if their security model is good. Of course, this applies to desktop computers as well, in case of burglaries, but I consider the likelihood of a break-in to be much lower than the risk of somebody grabbing my laptop off a café table while I have my back turned, or somebody stealing my backpack, laptop and all.

I will reiterate something Jeff Atwood said, because it’s important:

  1. Number one with a bullet: your email account is a de-facto master password for your online identity. Most -- if not all -- of your online accounts are secured through your email. Remember all those "forgot password" and "forgot account" links? Guess where they ultimately resolve to? If someone controls your email account, they have nearly unlimited access to every online identity you own across every website you visit.

  2. If you're anything like me, your email is a treasure trove of highly sensitive financial and personal information. Consider all the email notifications you get in today's highly interconnected web world. It's like a one-stop-shop for comprehensive and systematic identity theft.

I’m not here to tell you how to encrypt your data, because I don’t know how to do it in Windows and I don’t know how to do it on a Mac. (I’m told, in the latter case, that it is easy.) I am here to tell you that you should encrypt your data! —And if you choose not to, be aware of the risks.

One thing should be added: If you encrypt your data, backups are critical. Of course, backups are always important; I would hate to lose years of work, correspondence, important data, tax files, and so on, due to a hard drive failure—or, say, an apartment fire that destroys both my computers, which is quite bad enough without data loss on top of it.

But with encryption, it’s even more important. If a regular, unencrypted file system gets damaged (software error, crappy old hard drive, …), your OS can probably cope with this and recover pretty much everything you care about, because the on-disk format is well known and understood. Encryption throws a $5 wrench into the works here, by making the on-disk format extremely obscure: That’s the whole point, after all. This means that if your encrypted file system gets damaged, there’s a significantly higher risk that all your data become unreadable. (For example, if you use Linux/LUKS, like I do, and the metadata section containing the master key gets damaged, the partition is lost.)
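Incidentally, this particular risk is exactly what cryptsetup’s header-backup facility is for. A sketch (the device path is an example; run as root, and store the backup somewhere other than the encrypted disk itself, since anyone holding it can attempt passphrase attacks against it):

```shell
# Back up the LUKS header, which holds the encrypted master-key metadata;
# if the header sector is ever damaged, this backup is the difference
# between a quick restore and losing the whole partition.
# The device path is an example (adjust for your system).
cryptsetup luksHeaderBackup /dev/sda3 \
    --header-backup-file /root/sda3-luks-header.img

# Restoring, should it ever be needed:
#   cryptsetup luksHeaderRestore /dev/sda3 \
#       --header-backup-file /root/sda3-luks-header.img
```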

I didn’t think twice about this, because I have a reasonably solid backup strategy in place (everything I care enough about is synchronised with a remote server). If you want to encrypt your data but don’t have a backup solution in place, though, you should come up with one first.

If you’re using Linux, you should set up encryption when you install it. (Well, you should do this regardless of your OS, but this is a Linux-centric section.) With Ubuntu, it seems extremely easy, but I wasn’t thinking about it when I got my new laptop (I was too excited about a new toy, and having a laptop I could actually use), so I had to convert to an encrypted system after the fact.

Most importantly, I am encrypting my /home partition, where all my data reside, using LUKS (referring to this guide). I consider this by far the most important part—it’s where all my data reside, all my cached passwords could be stolen, all my email is backed up. It was not at all difficult—the only problematic part is that I needed to move the data aside in order to encrypt the partition (I don’t know of a way to encrypt it in place). For this reason, I have yet to do this on my desktop computer: I have no partition large enough to hold all the data!

I also encrypted my /tmp and swap partitions, where temporary data are kept, because cached passwords, sessions, etc., could potentially be retrieved from thence (here, I used this guide). Because they are (or can be) cleared on reboot, I opted for the recommended solution of using /dev/urandom as the key file: The password is randomly generated on boot, different every time, and thus pretty damned secure. I am told I should also encrypt /var/tmp, which is a bit trickier, because I don’t want to have to type in two LUKS keywords on boot. How important is it to encrypt /var/tmp? I gather KDE caches data there, but I do not use KDE. I suppose I may generate a keyfile and store it on the encrypted /home partition, or hell, even symlink it to a /home/cryptovar directory—on rare occasions when /home is not available, I don’t imagine I’ll care much about missing /var/tmp! Thoughts?
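For reference, the random-key setup described above boils down to a couple of lines of configuration. A sketch only; the device names are examples, and the exact option names vary a bit between distributions:

```shell
# /etc/crypttab (sketch; device names are examples):
# <name>      <device>     <key file>      <options>
cryptswap     /dev/sda5    /dev/urandom    swap,cipher=aes-cbc-essiv:sha256
crypttmp      /dev/sda6    /dev/urandom    tmp,cipher=aes-cbc-essiv:sha256

# ...and /etc/fstab then mounts the mapped devices:
# /dev/mapper/cryptswap  none  swap  sw        0  0
# /dev/mapper/crypttmp   /tmp  ext2  defaults  0  0
```

Because the key is read from /dev/urandom on every boot, the old contents of swap and /tmp are unrecoverable after a reboot, which is precisely the point.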

haggholm: (Default)

I’m pretty bad at password management. I don’t have a great memory for complicated strings of random characters—in fact I don’t have a great memory at all. In very rough terms, I use a set of passwords like

  1. A secure “standard” password for sites and services I trust (with some minor variations)
  2. A modified, more complicated version of the above for root passwords etc.
  3. A different password for desktop and laptop user login (…these should be different)
  4. My old “standard” password, now demoted to use on sites I don’t really trust to store my password securely
  5. A throw-away password (this one’s actually a dictionary word!) for untrusted services where I don’t care if they get hacked but where I need a password to use them

This is a hell of a lot better than using “p4ssw0rd” for a password wherever I go, but I do knowingly commit a mistake shared by many: I reuse passwords all over the place, and while I try to make a rough judgement call (do the people running this site seem like the sort to store my password securely hashed and salted, or in a reversible form, or even [shudder] in plaintext?), that’s a very fallible call to make. Also, and this is very bad, I often let my browser save my passwords. That’s very dangerous. It’s a product of sheer laziness.

Of course there are lots and lots of password managers designed to create and manage sets of better passwords—there are ones for Windows, ones for Linux, ones for Mac, and a fair number of cross-platform managers. But from my point of view these all share a number of weaknesses:

  • I need to install an application on any computer where I wish to use these passwords.
  • The application needs to be cross-platform and have good usability on at least Linux and Windows.
  • Most of the time I don’t worry about programs going out of vogue or dying—by the time a project dies in the Linux world, I’ll have long since moved to another—but a password manager needs staying power, because I can’t afford to lose all my passwords.

SuperGenPass is the best password manager idea I’ve ever seen. If you’re a non-technical reader, the best advice I can give you is “go here and install it”.

For the slightly more technical reader, here’s why I like the idea so much:

  1. It’s implemented in Javascript and runs as a browser plugin. The web is where I should use a profusion of passwords, so this is where I need quick, easy access. And it certainly looks easy: Type in your master password, click the “SuperGenPass” bookmarklet button, and voilà!
  2. It uses a hash of your master password and the domain name for a password, so every domain gets a unique password.
  3. Because each domain gets a unique password, it’s relatively safe to let your browser save the passwords. The usual vulnerability inherent in saved passwords is there, of course, but you only compromise one site at a time—never the master password.
  4. If the next version of your browser breaks compatibility with the plugin, the mobile version will let you retrieve your passwords. It’s a single, plain page with embedded Javascript. I can save it on my hard drive for easy password retrieval.
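The idea in point 2 can be sketched in a couple of lines of shell. This is an illustrative analogue only (SuperGenPass itself uses a different hash and output encoding, with some extra massaging), and the master password and domains are made-up examples:

```shell
# Illustrative sketch of per-domain password generation. NOT the exact
# SuperGenPass algorithm; just the same idea: hash(master + domain), so
# every domain yields a unique password and nothing needs to be stored.
master="cornflakes"      # example master password
gen_password() {
    printf '%s:%s' "$master" "$1" | sha256sum | cut -c1-10
}
gen_password "amazon.com"   # one password for the real site...
gen_password "amaz0n.com"   # ...and a completely different one for the phish
```

Note that a one-character change in the domain produces an unrelated password, which is where the (partial) phishing protection discussed below comes from.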

I can only see two obvious security caveats, one of which is easily negotiated, one of which looks like a fundamental and inevitable limitation of the design (and of its laudable goal of user friendliness). First, the fixable one:

SuperGenPass also provides some degree of phishing protection. Suppose you receive a phishing attack—for example, an e-mail that purports to be from Amazon but is actually from a malicious hacker trying to steal your password. It sends you to a page that’s set up to look like Amazon.com and has a similar URL (say, “www.amaz0n.com”), and includes a login form. Using SuperGenPass at this malicious Web site with your master password (“cornflakes”), your generated password is “uc15yrcmqI”. Compare with the previous example: though the master password is the same and the domain name is only slightly different, SuperGenPass generates a completely different password. Even if you are fooled by the phishing attack and attempt to log in to the impostor website, you haven’t sent your real password.

That’s fine, as far as it goes, but nothing prevents the website from harvesting your master password from the password <input/> before it’s hashed, and saving it via AJAX. If they know that you’re using SuperGenPass, they can then use your master password to generate all your other passwords. That sounds alarming, but I don’t think the odds of falling victim to this very specific phishing attack are very high. Additionally, there is an easy workaround for this: You can add a salt to the bookmarklet, which is not entered into anybody’s <input/>.

The second problem is that the algorithm uses the domain name as a salt for the hash…and that’s a pretty weak salt if a determined attacker wants to use something like a rainbow table attack: The salt is known. By design, SuperGenPass cannot use nonce values (it would compromise its excellent portability). Nor does the extra salt mentioned above help you here; it’s just part of your master password. (The hacker would crack your password+salt, not just your password.) If you are worried about somebody stealing your password and running that sort of thing on it, well, you’ll want to use more than one password. It never can hurt to use a separate password for extremely important sites, such as banking and email. (Yes, email should be considered extremely important: As Jeff Atwood has pointed out, anyone who hacks into your email can gain access to almost any other service you use by using the password reset function.)

But if these are weaknesses of SuperGenPass’s security, it is still a vast improvement on using only one or a small set of passwords. If I install this and reset a few passwords, I can use the same master passwords as I do now and gain a unique password for every site I use; even if somebody ran a rainbow table attack on my passwords (and why would anyone want my data that badly?), the worst-case scenario is their gaining access to one of my master passwords. Right now, when for all I know some forum could be storing that password in plaintext, the barrier of entry is abysmally low.

