Software is hard, an article/interview on Salon, based on Salon co-founder Scott Rosenberg's book Dreaming in Code. About, well, why programming is hard, and why it is particularly hard to estimate how long a given development task will take. I certainly recognize that problem. The book's sub-title is "Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software". That refers to Chandler, the particular focus of the book. Chandler is an ambitious project to create a new way of organizing personal information. It was conceived, financed and managed by Mitch Kapor, the guy who invented Lotus 1-2-3. He hired the smartest people he could find, and was even prepared for it to take a while. But it seems to be taking forever to even get a preview release of some kind out the door.
The seminal work about why software is surprisingly hard is of course The Mythical Man-Month by Fred Brooks, which drew on the development of OS/360, IBM's System/360 operating system, which also took forever. Part of the wisdom from that experience was that as you add more people to a software project, the number of communication channels between them grows far faster than the headcount: with n people there are n(n-1)/2 possible pairs who need to coordinate. It becomes much harder for anybody to know what is going on, and very hard to stay on the same page, so the majority of the effort is wasted in trying. Microsoft is of course a good current example. Very smart people, but the projects get so complex that it takes thousands of people and billions of dollars, the product ends up being several years late, and it is an incoherent mess. Brooks' solution was that programming should be done by small teams of 3-5 people with very well-defined roles, where one person would be responsible for the overall conceptual integrity of the project.
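To make Brooks' arithmetic concrete, here is a tiny Python sketch; it is just an illustration of the n(n-1)/2 pairing count, not anything taken from the book or the article:

```python
# Pairwise communication channels in a team of n people: n*(n-1)/2.
# Doubling the team size roughly quadruples the coordination overhead.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (3, 5, 10, 50, 200):
    print(f"{n:>3} people -> {channels(n):>6} possible channels")
```

Three people have 3 channels, five have 10, two hundred have 19,900; that coordination cost is what Brooks' small teams with well-defined roles were meant to contain.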
Scott Rosenberg doesn't seem to have anything terribly revolutionary to add, but he does formulate what he somewhat jokingly calls "Rosenberg's Law". Essentially, it is the (somewhat obvious) wisdom that software is only hard and difficult to estimate when one is doing something new, something that hasn't been done before. Which rings true. The people who can run very disciplined, on-time software projects usually can do so because they're doing something that has already been done. You know, if a customer needs a website with 10 pages and 5 graphics and a menu, and that's what you do every day, it wouldn't be too surprising that you can provide an exact quote for it.
The traditional "ideal" way of carrying out the software development cycle would be a sequence of studies, of feasibility and requirements, and an analysis of exactly what needs to be done, resulting in some specs handed to the programmer, who "just" needs to do it. That has never really worked, but, in principle, if it already is perfectly clear exactly what needs to be done, of course it isn't hard. It just never is clear, because people usually don't know exactly what they want before they see some stuff they don't want, and they have a choice. So that way of developing software is going out of style. It has to be more interactive than that. Shorter cycles, involving both the programmers and the users in reviewing the progress, frequently. Which tends to be how one does it nowadays.
Part of the trouble with software is that programmers are only having fun if they're doing something new. So, even if there is an existing solution, which would be boring but reliable, most programmers would prefer to build their own. And there's no ultimate formal way of doing something new that is partially unknown.
What is really missing is tools for modeling the thing to be done. Oh, there are diagrams one can draw, but that isn't it. One would need to model the real thing, whatever it is. Which, unfortunately, takes about the same amount of work as doing the project. So the general problem might only be solved around the same time that most programming is no longer necessary. I.e. you interactively work out the model of what to do in real-time, and when you're done, the software is done too. No separation between the specification and the doing. That would be great. There are systems that do this to some degree, but so far nobody has succeeded in making it general enough.
The ultimate software project would be to invent a system that makes programming obsolete, by making it so simple that anybody can do it, very quickly. Unfortunately, that's a hard one. [ Programming | 2007-02-05 15:21 | PermaLink ]
Culiblog, Metafilter. In India there's a system where home-cooked meals get delivered to your office every day, and apparently it works remarkably well. From the Culiblog piece:

In Mumbai (pop +16 million) there are reported to be more than 5,000 Dabba Wallahs. A “Dabba” is a ‘tiffin’ or ‘lunch box’, a ‘Wallah’ is a man or the carrier. The Dabba Wallahs deliver home cooked meals, picked up piping hot each morning from suburban households, and distribute them to more than 170,000 office workers spread across the entire city. This system relies on multiple relays of Dabba Wallahs, and a single tiffin box may change hands up to three times during its journey from home to office.
No matter that few Dabba Wallahs can read or write, they interpret a series of colour coded dots, dashes and crosses on the lids of the lunch containers, indicating the area, street, building and floor of the Dabba’s final destination. The Dabba Wallah margin of error has been calculated at one mistake in eight million deliveries, an accuracy that has earned the Dabba Wallah system a Sigma 6 rating by Forbes magazine. ‘Sigma’ is a term used in quality assurance if the percentage of correctness is 99.9999999 or more. Here comes the math: for every six million tiffins delivered, only one fails to arrive. This error rate means that a Mumbai tiffin goes astray only once every two months.

Of course, that rate of success sounds greatly exaggerated, and I doubt it can be true, even if it probably is a very efficient system. But Six Sigma is kind of an interesting concept. Actually, it just requires 99.9997% accuracy, i.e. about 3.4 errors per million, not 99.9999999%, which would be one error per billion and sounds pretty unfeasible. 99.9997% sounds pretty crazy as well, if humans are involved. I suppose an automated banking system certainly ought to have that kind of error rate or better. (The standard Six Sigma arithmetic is sketched below.) [ Culture | 2007-02-05 15:45 | PermaLink ]
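For the curious, the usual Six Sigma figures come from the tail of the normal distribution with a conventional 1.5-sigma long-term shift; that shift is a standard assumption from the quality-assurance literature, not something in the Culiblog piece. A minimal Python sketch of that arithmetic:

```python
from statistics import NormalDist

def defects_per_million(sigma_level: float, shift: float = 1.5) -> float:
    # Defects per million opportunities at a given sigma level, using the
    # conventional 1.5-sigma long-term shift assumed in Six Sigma tables.
    return (1 - NormalDist().cdf(sigma_level - shift)) * 1_000_000

for s in (3, 4, 5, 6):
    print(f"{s} sigma -> {defects_per_million(s):9.1f} defects per million")
```

Six sigma comes out at about 3.4 defects per million, i.e. roughly 99.9997% accuracy; the quoted one-in-eight-million figure would, taken at face value, be even better than that.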