Posted by mchampion
on September 29, 2004 at 6:01 PM PDT
"Use the right tool for the job" is a cliche that is hard to dispute. Or so I used to think before this new round of the REST vs Web Services debate started up. I want to hear how to do hard things RESTfully, not hear once again about the pointlessness of doing easy things with WS-*.
Not surprisingly, peace has not broken out in the ongoing dispute over web services specifications described in my last post. The continuing attacks on the WS-* family of specifications by advocates of simply using HTTP and XML together somehow remind me of a bicycle company positioning its products against those of an armored truck manufacturer: "It's more agile in traffic, it gets an infinite number of miles per gallon of petroleum fuel, it's better for the environment, and it improves your health!" All true, of course, but completely irrelevant: you can't do the secure heavy hauling work that the truck is designed for with bicycles, except perhaps with extreme ingenuity and monumental inefficiency.
For example, a number of REST advocates are declaring victory because of the RESTfulness of the Bloglines web service APIs. This is a bit like Schwinn declaring victory whenever someone in a sunny climate buys a bicycle rather than an armored car to commute a mile to work. It's not at all clear to me why anyone at Bloglines or elsewhere would even think of using the WS-* technologies to do this job. After all:
- The sources of information are generally known to the consumers by their URI, and can easily be discovered with mechanisms such as Google.
- The information being aggregated is public and not particularly sensitive, i.e. there would be little plausible benefit from attempting to steal or mis-attribute it.
- The information is already in XML form (or something very close to XML, modulo some misunderstandings of the finer points of the spec).
- It is already on the web, directly accessible via HTTP.
- The consumers of the API are experienced software developers, who presumably understand HTTP and XML natively and don't need a hand-holding "import a description of the API and generate all the code" tool to get the job done.
So, this is a job for which HTTP+XML is very well suited, and for which WS-* technologies add very little potential value. As a loyal Bloglines "customer" [just what is their business model, anyway?] I'm happy to see them use the right technologies for this job. One might contrast this with Google, which drank deeply from the web services Kool-Aid pitcher when developing its own Web API a couple of years ago and doesn't seem to have created much real value with it.
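To see why the plain HTTP+XML approach fits this job so well, consider what a Bloglines-style consumer actually has to do: issue an ordinary GET for a feed URL and parse the XML with whatever parser is at hand, with no WSDL, no generated stubs, and no toolkit. Here is a minimal sketch; the feed content is a made-up sample standing in for the response body you would get from a plain HTTP GET (e.g. `urllib.request.urlopen(feed_url).read()`).

```python
# A minimal sketch of the "plain HTTP + XML" style described above.
# SAMPLE_FEED is an invented stand-in for the body of an ordinary
# HTTP GET against a feed URL -- no WSDL, no code generation needed.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """\
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <item><title>First post</title><link>http://example.org/1</link></item>
    <item><title>Second post</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

def item_titles(feed_xml):
    """Return the item titles from an RSS 2.0 document."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(SAMPLE_FEED))  # prints ['First post', 'Second post']
```

That is the whole client: a URI, a GET, and a parser. An experienced developer needs nothing more, which is exactly the point of the bulleted list above.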
Let's consider, however, another scenario in which some business or agency needs to present an integrated view of diverse information sources, but has additional requirements:
- New sources of information must be discovered in real time with minimal human intervention.
- The information being aggregated is sensitive and confidential, so it must be encrypted and digitally signed, and must carry access control assertions.
- The systems from which information is gathered include mainframes, and some communicate with the outside using industrial strength middleware such as IBM's MQ technology or Software AG's EntireX rather than HTTP.
- Many of the communications links involve multiple hops, some of which go over intrinsically unreliable networks.
- The effective address of some information sources is discovered dynamically as part of a multipart interaction rather than via a published URI, and that address needs to be carried along with subsequent messages.
- The information needs to be incorporated into a variety of off-the-shelf client programs by people with only basic programming skills.
OK, all you folks who love to point out that HTTP+XML meets the needs of 99% of your customers: how do you meet the needs of the very real organizations that do have these kinds of requirements? Chances are you'll end up inventing something at least as complex as the WS-* stack, and you'll forgo the benefit of the dozens of person-years of discussion about trials, errors, pitfalls, nitpicks, clarifications, and so forth that have gone into these specs. By rolling your own solution to these requirements, you'll also lose the network effects of all the real products that support WS-*.
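To make concrete what those requirements buy you, here is a rough, deliberately simplified sketch of the envelope shape that signing, security tokens, and dynamically discovered multi-hop addresses lead to. This is not a conformant WS-Security or WS-Addressing message; the namespaces and element names are abbreviated placeholders for illustration only.

```python
# An illustrative, SIMPLIFIED sketch (not a conformant WS-* message)
# of the envelope shape the requirements above lead to. The wsse/wsa
# namespaces here are invented placeholders, not the real spec URIs.
import xml.etree.ElementTree as ET

ENVELOPE = """\
<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/"
          xmlns:wsse="urn:example:security"
          xmlns:wsa="urn:example:addressing">
  <Header>
    <wsa:To>urn:endpoint-discovered-at-runtime</wsa:To>
    <wsa:ReplyTo>urn:requester-endpoint</wsa:ReplyTo>
    <wsse:Security>
      <wsse:BinarySecurityToken>...</wsse:BinarySecurityToken>
      <wsse:Signature>...</wsse:Signature>
    </wsse:Security>
  </Header>
  <Body>
    <GetConfidentialReport/>
  </Body>
</Envelope>"""

root = ET.fromstring(ENVELOPE)
# The security and addressing data ride inside the message itself,
# independent of the transport -- which is what lets one message
# traverse MQ, EntireX, and HTTP hops without losing them.
header = root.find("{http://schemas.xmlsoap.org/soap/envelope/}Header")
print(header is not None)
```

The point of the sketch is not the syntax but the structure: once addresses, tokens, and signatures must travel with the message across heterogeneous, multi-hop transports, you need agreed-upon header slots for them, and that agreement is precisely what the WS-* specs provide.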
I have no stake in the WS-* stuff -- it was developed outside the W3C system in which I spent a lot of time over the last several years, and I haven't been to any of the review workshops. Nor will I argue that WS-* will give market success to those who have bet on it, or that it hits the 80/20 point in the tradeoff between functionality and complexity. All I will argue is that a) it is designed to address real problems faced by real organizations who have tried to integrate enterprise systems with Web technology, and b) there is an immense amount of intelligence, thought, and real experience distilled into these things that needs to be given a lot of respect.
If people in the REST camp want to actually change minds rather than exchange virtual pats on the back in their echo chamber, they'll have to explain how to do the hard things RESTfully, not belabor the pointlessness of doing easy things with WS-*.
It's a nice day, maybe I'll ride my bike to the public library at lunch... but that doesn't mean you shouldn't use an armored car if you need to haul cash around Newark on a rainy night.