So, my computer is back up. I ended up having to do a complete reformat and reinstall of OS X. Luckily I had that backup from the day before of all my personal stuff. And, luckily, I had kept copies of most of the programs I'd need to reinstall. So, it went remarkably smoothly.
But there seem to be loads of problems with technology this week. At least in my world. Server problems, connectivity problems, hardware failures. My AirPort base station keeps rebooting, Covad was down most of the day in all of L.A., one of my client networks got locked out from the backbone, my car overheated on the freeway and is in the shop. And I don't really have time for any of it. People to see and places to go.
But it makes me philosophize about the fragility of technology that is designed with single points of failure. And think about how to arrange things so that a failure doesn't matter.
If you access your e-mail from Hotmail or Yahoo, it isn't important which computer you use. If one breaks down, you'll see the same thing from any other. If your browser isn't working, you can use another one. And the servers they use will by necessity have to be distributed, so relatively little can bring them down.
Accessing anything in a browser is a very fault-tolerant approach. Just like I can check my voicemail from any phone, and I can switch my SIM card to a different phone. Same principle.
However, creating fault tolerance at the other end is rather hard for regular humans. If I'm accessing something at the other end that there's only one of, like my own server, there's still plenty that can go wrong. There are things I can do about that, through database replication and file sync'ing programs, but it still doesn't mean I'd easily recover from any big server crash. Seamless failure recovery is difficult and expensive.
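Just to make that concrete: the simplest kind of file sync'ing is really nothing more than copying whatever has changed to a second place. Here's a rough Python sketch of that idea; the two paths are made up for illustration, and real tools like rsync handle this much more carefully.

```python
import os
import shutil

# A minimal one-way sync sketch: walk a source folder and copy any file
# that is missing from, or newer than, the copy on the backup side.
# The paths below are placeholders, not real locations.
def sync(src, dst):
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            # copy when the backup is missing or older than the source
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)

sync("/Users/me/Documents", "/Volumes/Backup/Documents")
```

That covers files; it says nothing about keeping a database mirrored, which is where the replication tools come in.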
I still wouldn't think of trusting my archived e-mail or any other important information store to some company's website. I put lots of my personal relations into sixdegrees.com, but then they went bankrupt and lost all of it. Policies change, management changes, economies change, people make mistakes, lose interest, etc. I do trust quite a bit to my own webserver, because I know where things are and what I'm doing to keep it reasonably reliable.
I'm sort of looking for some keys to widespread fault tolerance. Having several redundant pieces of something is certainly one of them. If I have several cars standing outside, it doesn't matter too much if one isn't working.
Easy conversion between different storage formats, and widespread adherence to standards - that would be another. Address book sync'ing would be an example of that. I can have the same address book on my PDA as in my mail program without too much trouble. Same principle as being able to save a song in MP3 format, and put copies in several different places.
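For illustration, here's roughly what that kind of conversion amounts to, sketched in Python: pull a few standard fields out of a vCard file and write them out as CSV. The filenames are placeholders, and a real address book format has many more fields and folding rules than this toy handles.

```python
import csv

# Toy converter: read the fields most address books share (FN, TEL, EMAIL)
# from a vCard file and write them as CSV rows. File names are made up.
def vcards_to_csv(vcf_path, csv_path):
    cards, card = [], {}
    with open(vcf_path) as f:
        for line in f:
            line = line.strip()
            if line.upper() == "BEGIN:VCARD":
                card = {}
            elif line.upper() == "END:VCARD":
                cards.append(card)
            elif ":" in line:
                key, value = line.split(":", 1)
                key = key.split(";", 1)[0].upper()  # drop parameters like TEL;TYPE=cell
                if key in ("FN", "TEL", "EMAIL"):
                    card.setdefault(key, value)
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["FN", "TEL", "EMAIL"])
        writer.writeheader()
        writer.writerows(cards)

vcards_to_csv("contacts.vcf", "contacts.csv")
```

The point isn't the code, it's that because vCard is a standard, every program that speaks it can hold the same copy of my contacts.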
The way routing works on the net is sort of a mix of these. There's plenty of different ways of getting between two points, and they all speak the same language. So the net can route around failure. Mostly. That still doesn't apply to my website itself. If the server is down, people see nothing. If the name servers aren't working, they can't find it.
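The same principle can be illustrated at the application level with a little hedged Python sketch: ask for every address a name resolves to, and try them one by one until something answers. The hostname is just a placeholder.

```python
import socket

# Resolve a host to all of its addresses and try each one in turn,
# the way the net routes around a dead path. "example.org" is a stand-in.
def connect_any(host, port=80, timeout=5):
    last_error = None
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, 0, socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(addr)
            return sock          # first address that answers wins
        except OSError as err:
            sock.close()
            last_error = err     # this path failed; try the next one
    raise last_error or OSError("no usable address for %s" % host)

s = connect_any("example.org")
print("connected to", s.getpeername())
s.close()
```

Of course that only helps if the name resolves to more than one working server, which is exactly what my site doesn't have.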
It would be nice if one could put one's website in an automatically distributed and redundant "place", so that access to it wouldn't depend on any particular server being up or not. Like one can with shared files on BitTorrent. But I'd like a way of distributing a whole website or a database that way.
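The core idea behind that kind of distribution is simple enough to sketch: split the content into pieces, name each piece by the hash of its bytes, and publish the list of hashes, so anyone holding a piece whose hash matches can serve it and no single server has to stay up. Here's a rough Python version; the piece size and filename are arbitrary choices for illustration, not how BitTorrent actually encodes things.

```python
import hashlib

PIECE_SIZE = 256 * 1024  # 256 KB pieces, an arbitrary choice

# Build a manifest of piece hashes for a bundled-up site.
# "site-archive.tar" is a placeholder for whatever the bundle would be.
def make_manifest(path):
    hashes = []
    with open(path, "rb") as f:
        while True:
            piece = f.read(PIECE_SIZE)
            if not piece:
                break
            hashes.append(hashlib.sha1(piece).hexdigest())
    return hashes

manifest = make_manifest("site-archive.tar")
print(len(manifest), "pieces:", manifest[:3])
```

With a manifest like that, a copy of any piece can be verified no matter where it came from, which is what makes it safe to fetch from strangers.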
I'd like my data to feel like it is ubiquitous and non-local. Being there when I need it, but without having to go around worrying about where I put it and whether that is safe enough. And next I'd like that with the hardware too, of course. You know, just speak into the air like Captain Picard, and the technology will be there to listen and give you what you want.