Posted by mikel
on June 26, 2003 at 11:50 AM PDT
Jaron Lanier's phenotropic computing, the early Internet, XML, and other stuff
This rather long ramble is something I've been meaning to write for a long time. So it may be incoherent, but at least I'm getting it off my chest.
The fundamental idea behind the articles Jaron Lanier has been publishing in the past few months is that protocols should be designed around the question "what are you trying to tell me to do?", rather than "that was an illegal operation" (404: not found). I hate the name "phenotropic computing", but his ideas are extremely important.
One problem I see with a lot of computing projects--and I particularly see it in projects coming from the XML community--is an attitude I can only describe as "I've done my job when I've told you the document doesn't conform." I've been rather wary of XML for years now, for a number of reasons, but that's one of them. "The document doesn't conform" is a dead-end, both for developers and users. It's a way to build software that's brittle and difficult, if not impossible, to use.
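To make the contrast concrete, here's a toy Python sketch (my own example, nothing from Lanier or the XML specs): a strict parser's answer to a slightly-broken document is "the document doesn't conform," full stop, while a tolerant parser salvages the text any human could plainly read.

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

broken = "<note><to>Alice</to><body>Hi there</note>"  # missing </body>

# The "I've done my job when I've told you it doesn't conform" style:
# reject the entire document and hand back nothing usable.
try:
    ET.fromstring(broken)
    strict_result = "ok"
except ET.ParseError as e:
    strict_result = f"error: {e}"

# A tolerant style: recover whatever content we can understand.
# (Python's html.parser never raises on malformed input.)
class Salvager(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text = []

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

s = Salvager()
s.feed(broken)

print(strict_result)  # the strict parser gave us an error and nothing else
print(s.text)         # the tolerant one recovered both pieces of text
```

Neither style is magic, but only one of them leaves the user with something to work with.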
It's instructive to think about the early days of the Internet, and Jon Postel's dictum: "be conservative in what you send, and liberal in what you accept." If the Internet had been built around the idea of strict conformance to protocols, it wouldn't have gotten anywhere. In fact, if you've looked at any of the early protocols, it's amazing how much real crap goes on under the surface. In an unpublished chapter of the unfinished "Internet Application Protocols", Eric Hall describes some bizarre behavior in Telnet option negotiation: one server would lie about the options it would accept, because that was the only way it could get a particular client out there to tell it about some of its capabilities. Once it found out, it negotiated the options it really wanted. That's the kind of computing that takes figuring out the right thing to do seriously--not just saying "sorry, I don't accept those options, goodbye." There are weird nuggets like that in just about all of the old protocols: odd behaviors in which the client, or server, was really trying to figure out what the other end of the connection wanted it to do, because what it was saying didn't really make sense.
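For flavor, here's a minimal sketch of the tolerant end of Telnet option negotiation. The command bytes are the real ones from RFC 854; the `respond` helper and its supported-option set are my own invention, not the behavior of any particular server. The point is that an option you've never heard of gets a polite refusal, not a dropped connection:

```python
# Telnet command bytes (RFC 854)
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254
ECHO = 1  # one option we'll pretend to support

def respond(command, option, supported=frozenset({ECHO})):
    """Answer a single option request instead of hanging up on it.

    A brittle peer might treat an unknown option as a fatal error;
    a tolerant one refuses it and keeps the conversation going.
    """
    if command == DO:        # peer asks us to enable an option
        reply = WILL if option in supported else WONT
    elif command == WILL:    # peer offers to enable an option
        reply = DO if option in supported else DONT
    else:                    # WONT/DONT: acknowledge the refusal, never insist
        reply = DONT if command == WONT else WONT
    return bytes([IAC, reply, option])

# Peer asks us to perform option 39, which we've never heard of:
print(respond(DO, 39))  # IAC WONT 39 -- "no thanks," but we stay connected
print(respond(DO, ECHO))  # IAC WILL 1 -- an option we do support
```

The server Hall describes went further still, lying in its refusals to coax information out of a client; this sketch only shows the baseline courtesy of not treating the unknown as fatal.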
Now, I'm not claiming this is a good way to write software. In part, it happened because nobody really knew how to define and read protocols back then. The inventors of the Internet were still learning. But perhaps we've gotten too good at it. Having learned how to make networks work, have we thrown out the biggest lessons? HTML was successful precisely because it (and the early Web clients) tolerated all sorts of malformed junk. (Do you think I'm typing my /ps now? Ha.) If HTML had been born as XHTML, and if browsers had demanded conforming documents, we wouldn't have a better web--we wouldn't have a web at all. Ideas like Schema, RDF, DocBook, and many other standards in the XML world really fail because they are entirely too specific, they don't tolerate ambiguity, and there's no mechanism that I'm aware of to handle the simple question: "what do you really want to do?"
I don't know how to write software (either end-user software or low-level protocols) that figures out what users really want to do. We won't get there by special-casing all sorts of weird situations, like the people implementing the early Internet protocols did. But we also won't get there by defining, in ever more precise terms, exactly what input we want and how we want it. "How do I figure out what you really want me to do?" is clearly the question we ought to be asking.