I've been involved with the Web Services versus REST debate in one way or another for the best part of 8 years now. Having been involved with various standards activities in the area for just as long, and having developed applications using both approaches, it's with some level of experience and understanding that I'm still proud to call myself a fence sitter. I also belong to a silent majority of people who simply don't get involved with these SOAP versus REST (or SOA versus REST) debates as often as the vocal minority: I don't know about others, but I simply don't have the time! However, a couple of things happened recently that pushed me into writing this. The first is that JJ asked me to co-author some work in this space to try to help settle the discussion (at least in some respects), and the second was editing the InfoQ piece on what Ganesh had said.
I agree broadly with Ganesh and have been saying the same things for years. When discussing MEST with Jim and Savas in its early years, we covered the same ground: distributed computing practitioners have been doing this kind of work for years. I believe that's why they eventually clarified that MEST isn't necessarily anything new, but rather a term to cover an architectural approach that (some) people in the industry (and academia) have been using. I don't actually care what we call it (MEST, message-oriented, message-based, Nirvana) as long as there's something we can point to and agree about, something that has many years of good practice and use cases behind it.
I've been developing distributed systems (small and large scale [physical remoteness of participants and number of participants]) for over 20 years. My experience pre-dates Sun RPC, for instance, going back to a time when TCP/IP wasn't the default way in which to build systems. (My first main development effort was collaborating on the Rajdoot RPC mechanism.) I still think UDP has much more to offer than TCP, which is a good general-purpose protocol for reliable delivery of messages; but if you know the specifics of your application and distributed environment, it's often better (easier, more efficient, faster) to build something on UDP. But I digress.
If you look at distributed computing (it doesn't even have to be the Internet), it's all about message passing at some level: even the dreaded RPC is simply an abstraction of two correlated messages. In the beginning that's all you had: low-level message passing primitives, and you encoded the information you wanted to convey in the message somewhere (since you were probably only talking to endpoints you had developed, it was easy to get agreement on the payload format - they did what you wanted!) But this was a cumbersome and manual process, making large scale distributed systems development slow and error prone. Then someone had the bright idea to take a high-level programming language abstraction and layer it on top: RPC was born. The fact that multi-threaded processes and operating systems were at least a decade away meant that most message passing implementations were synchronous anyway, so RPC was an abstraction that fit with best practices. RPC started to constrain the more open (general) interface of send-message(blob)/receive-message(blob), trading this off for ease of use. When object-oriented programming became the standard, distributed object technologies with their own versions of client/server stub generators took off. These didn't constrain the interface any more than RPC did, but they were a logical extension of the paradigm.
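To make the "two correlated messages" point concrete, here's a minimal sketch of what a generated client stub essentially does under the covers. The Transport, Message and getBalance names are hypothetical and not taken from any particular RPC system: marshal an opcode and parameters into a request, send it, block for the correlated reply, and unmarshal the result.

```java
// Hand-rolled illustration of what an RPC client stub does under the covers.
// Transport, Message and the wire format are hypothetical; real systems
// (Sun RPC, CORBA, etc.) differ in detail but follow the same shape.
import java.io.*;

class Message {
    final int correlationId;
    final byte[] payload;
    Message(int correlationId, byte[] payload) {
        this.correlationId = correlationId;
        this.payload = payload;
    }
}

interface Transport {
    void send(Message m) throws IOException;   // one-way send
    Message receive() throws IOException;      // blocking receive
}

class AccountClientStub {
    private static final int OP_GET_BALANCE = 1;   // the opcode baked into the stub
    private final Transport transport;
    private int nextId = 0;

    AccountClientStub(Transport transport) { this.transport = transport; }

    // Looks like a local call, but is really two correlated messages.
    long getBalance(String accountId) throws IOException {
        int id = ++nextId;

        // Marshal the opcode and parameters into the request payload.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(OP_GET_BALANCE);
        out.writeUTF(accountId);
        transport.send(new Message(id, bos.toByteArray()));

        // Wait for the correlated reply, then unmarshal the result.
        Message reply = transport.receive();
        while (reply.correlationId != id) {
            reply = transport.receive();
        }
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(reply.payload));
        return in.readLong();
    }
}
```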
The "problem" with RPC (and distributed objects) is precisely that it constrains how you can (or can't) change your implementation with free abandon. The client and server stubs (the code that marshals and unmarshals parameters and opcodes and calls down to the network or up to the implementation object respectively) is closely tied to the object interface: change the interface and you must change the stubs. Requiring changes to the stubs in a closely coupled, limited distributed system is possible, but as you extend the size (range, number of objects) of that distribution it becomes difficult, if not impossible, to ensure that all users will get the new code. With a more generic interface you can modify the backend implementation (within reason) without having to regenerate the stubs. However, the problem of marshaling and unmarshaling still remains: ultimately something needs to call something concrete in order to do the work requested and somewhere there needs to be some agreement about where in the message the parameters and opcode reside to make sure that the right unit of work is performed. (The discussion about how this pushes the contract between endpoints into the message and not into the service interface is something for another day.)
If we look at the OMG's Activity Service, for example (an attempt at a generic/loosely coupled [and hence more extensible] transactional infrastructure), the participants are all implementations of the CORBA Action interface, which has a single method, processSignal (you won't find a prepare, commit or rollback method signature anywhere). The parameter to processSignal is a Signal, which is essentially a CORBA any: anything can be encoded within it, of arbitrary complexity or simplicity. Therefore Action participants can change without affecting the sender code directly (in theory!) But how does this affect the application itself? Since it is working in terms of received Signals, which can have any information encoded within them, it is now very similar to the original low-level TCP/IP receiver/dispatcher code: although the low-level infrastructure does not change if the Action implementations change, the application developer (or in this case the Activity Service user) must become responsible for encoding and decoding the messages received and acting on them in the same way as before, i.e., dispatching to methods, procedures or whatever based on the content of the Signal.
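A simplified Java rendering of that pattern (illustrative names and shapes, not the actual OMG IDL) shows where the work ends up: the infrastructure-facing interface stays fixed, while the participant has to crack open each Signal and dispatch on its content, much like the old hand-written receiver code.

```java
// Simplified sketch of an Activity Service style participant; the names and
// shapes are illustrative, not the real OMG/CORBA definitions.
import java.util.Map;

class Signal {
    final String name;                 // e.g. "prepare", "commit", "rollback"
    final Map<String, Object> data;    // arbitrary, any-like payload
    Signal(String name, Map<String, Object> data) { this.name = name; this.data = data; }
}

enum Outcome { SUCCESS, FAILURE }

interface Action {
    Outcome processSignal(Signal signal);   // the single, generic entry point
}

class TwoPhaseParticipant implements Action {
    @Override
    public Outcome processSignal(Signal signal) {
        // The dispatching that a generated stub used to do now lives in
        // application-level code, keyed off the content of the message.
        switch (signal.name) {
            case "prepare":  return prepare(signal);
            case "commit":   return commit(signal);
            case "rollback": return rollback(signal);
            default:         return Outcome.FAILURE;   // unknown signal
        }
    }

    private Outcome prepare(Signal s)  { /* vote based on local state */  return Outcome.SUCCESS; }
    private Outcome commit(Signal s)   { /* make the work durable */      return Outcome.SUCCESS; }
    private Outcome rollback(Signal s) { /* undo provisional work */      return Outcome.SUCCESS; }
}
```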
At the low level, messages (Signals) can carry any data, but higher up the stack the application developer is constraining the messages by imposing syntactic and semantic meaning on them (based on the contract that exists between sender and receiver): back to the opcodes and parameters. Therefore, at the developer’s level, changes to the implementation (the contract, the object implementation etc.) do affect the developer again: this can never be avoided, since at some point you need the equivalent of a dispatching stub if you want to do the work. The message-driven pattern simply moves the level affected by change up the stack, closer to the developer: in some cases that may well be the right place for decisions on that change to be made; in others it isn't. If you have the right tools to assist in the development of distributed systems based on this approach, then it's fine and can really help bring flexibility and extensibility to your systems. But without those tools, it can be a problem, particularly as you want to scale your systems beyond your own organisation (or even your own department!)
Now we all know that Web Services uses HTTP as a transport protocol. It's fair to say that this is a bastardisation of HTTP. I was at the first OMG meeting where the ideas behind SOAP were introduced, and it was pretty evident (and admitted by some) that the reason for using HTTP was to tunnel through firewalls. This fact has probably been instrumental in limiting the bindings of SOAP, but it has also been key to its adoption. Naturally enough, RPC was the approach that pervaded Web Services development. That's because the tools were there (from distributed object systems) and it fit the applications and services that were being developed. Sure, RPC is limiting, as I mentioned before. But in the grand scheme of things it's hardly the great evil some try to make it out to be. Sometimes there are good reasons why you should use RPC. Don't let anyone dissuade you from that. But sometimes there are good reasons why you shouldn't. You need to look at what you're trying to accomplish and fit the right tool (abstraction in this case) to the right job. If it's RPC, then go for it! If you've done your homework about your needs and the assumptions made about your application, services and infrastructure, don't let someone who hasn't persuade you otherwise just because "the Web doesn't work that way". Let's remember the Million Flies Argument!
In general, the way we've been evolving WS-* standards and specifications is away from RPC and back to a more message-oriented approach, with one-way message invocations, to facilitate loose coupling and the kinds of long-duration interactions we see on the Internet (I think one of the first specifications to really push this was WS-CAF). Correlation of these one-way messages is used to achieve request/response interactions (aka RPC). But this whole approach still constrains the interface: changing the backend implementation is only possible in a limited way. Yes, this has all sorts of other effects, such as the inability to utilise HTTP cacheing, but if I don't need that, what's the problem? Maybe I can handle cacheing within the application anyway? Believe it or not, cacheing protocols did exist before the Web came on the scene! But this is not a black-or-white argument: the problems that exist because of the way in which Web Services use HTTP are important to some developers and we should not ignore them. But neither should we make them the central reason for not using Web Services.
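As a rough sketch of what that correlation looks like (hypothetical types, in the spirit of WS-Addressing's MessageID/RelatesTo headers rather than an implementation of them), request/response is rebuilt from two one-way messages by matching a reply's relatesTo against the outstanding request's messageId.

```java
// Rough sketch of correlating one-way messages back into request/response;
// all types are hypothetical, loosely inspired by WS-Addressing headers.
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Consumer;

class OneWayMessage {
    final String messageId;
    final String relatesTo;   // null for an unsolicited request
    final String body;
    OneWayMessage(String messageId, String relatesTo, String body) {
        this.messageId = messageId;
        this.relatesTo = relatesTo;
        this.body = body;
    }
}

class Correlator {
    private final ConcurrentMap<String, CompletableFuture<OneWayMessage>> pending =
            new ConcurrentHashMap<>();

    // Send a one-way request; the returned future completes when a message
    // arrives whose relatesTo matches our messageId.
    CompletableFuture<OneWayMessage> sendRequest(String body, Consumer<OneWayMessage> wire) {
        String id = UUID.randomUUID().toString();
        CompletableFuture<OneWayMessage> reply = new CompletableFuture<>();
        pending.put(id, reply);
        wire.accept(new OneWayMessage(id, null, body));   // one-way send
        return reply;
    }

    // Called by the inbound listener for every one-way message received.
    void onMessage(OneWayMessage m) {
        if (m.relatesTo == null) {
            return;   // an unsolicited request, not a reply; not handled here
        }
        CompletableFuture<OneWayMessage> reply = pending.remove(m.relatesTo);
        if (reply != null) {
            reply.complete(m);   // request/response re-established via correlation
        }
    }
}
```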
But the REST protagonists (and let's make this clear, most of them are really talking about REST/HTTP) use the uniform interface and resource-oriented approach of the Web to argue that it is superior to SOAP/HTTP. Well, as I said earlier, I like REST, and technically there is no reason we cannot do with it what is done in WS-*. But the Web does have its problems too: broken links, for example, and the lack of orphan detection and elimination. Of course you can live with these deficiencies: we do that every day. But they force the developer into a mindset that could otherwise be simplified and improved. Now I'm not suggesting that WS-* would solve these issues either! I'm simply pointing out that it's not a done deal with REST. But developing using REST does have some significant advantages over SOAP for certain types of application. And this has nothing to do with putting the human in the loop, i.e., the fact that most people interact with the Web through a browser has nothing to do with this: REST/HTTP is just as useful when there are no human tasks involved in the system.
So where does this leave us? I'm a fence sitter because I've never been someone who believes in one size fits all. A good architect or developer needs to be open to all of the possibilities when tackling any problem. Approaches such as REST or Web Services should be seen as tools in your tool belt, to be used as and when necessary (although with enough force you could use a hammer to cut wood, that's not normally the tool you'd use!) I think the debate between the REST and Web Services people has become too polarised, and there is a lot of Emperor's New Clothes Syndrome going around. No one should be thinking that Web Services or REST are meant as a replacement for (all) pre-existing distributed system infrastructures. And you should definitely not be pressured into one approach or another! Have an open mind and match your requirements against the capabilities offered by each approach (and let's not rule out some of the older technologies like CORBA or DCOM, which still have things to offer). Certainly, when I'm developing "Internet scale" applications, I'll look at all possible approaches and choose the right one for the right job. Getting input from others, particularly based on their experiences, is always a good thing as well. But remember: your mileage may vary. What's right for one person or organisation may not be right for you. Don't follow the crowd just because they are vocal: the emperor may be naked after all!
Thursday, December 27, 2007
7 comments:
Wow - had a nice quiet Christmas?
I think "message passing" is beyond even distributed computing, Alan Kay originally intended OO to be all about message passing in smalltalk.
Perhaps the CORBA or DCOM style distributed objects came from taking that idea (perhaps) too far.
Mark,
I'm glad to see this issue becoming important enough to discuss again. I was influenced in my thinking by your old friend Jim Webber, who opened my eyes, so to speak, to the possibilities of the messaging paradigm during a consulting engagement with my employer.
I would add one piece to your comments on RPC. It's not just a benign "constrained correlation between two messages", as it might appear from your post. The philosophical underpinning of RPC is the idea that remote objects can be made to look local. This is a fallacy and an unachievable goal, because we run up against the pass-by-value/pass-by-reference differences that cannot be bridged.
The philosophical underpinning of messaging, on the other hand, is that _all_ systems are made to appear remote, and only pass-by-value semantics are supported. Although it may seem to be wasteful in the case of local access, it is in fact more rigorous in terms of system integrity.
That's the real reason why RPC is "evil", and I blogged about that at some length :-).
Stefan, I think the REST (or WS-*) gods are angered: they killed my disc just after I posted ;-)
Mic, yes I agree. My first foray into MP was with Smalltalk-80. But I tried not to digress too much in the post ;-)
Ganesh, there are a lot worse evils in the world of computer "science" than RPC. I spent a lot of time working on various systems back in the 80's and 90's that were all based successfully on RPC. (As I said, I also helped implement RPCs so I do know quite a bit about them, thanks!) Don't look at CORBA or DCOM as shining examples of what's possible either. Distribution transparency is good for many, but distribution opacity is not always bad either (and yes, you can do some interesting things with pass-by-value as well as pass-by-reference to make them appear local too - read up on the literature).
But the important thing is that you should use the right abstraction at the right level for your project and needs. Don't over use these things, and definitely don't use them where they weren't intended.
RPC is a simplification. As with most simplifications (and abstractions) it comes with a certain set of assumptions (rules if you like): if you can live with them, then it can help. If you can't, then don't use it. But don't complain that it doesn't work in areas where those assumptions are silently ignored.
Your comment about message passing is wrong too. As Mic pointed out, it's been around a long time and has nothing to do with the remoteness of participants. Just because you're using a system that is based entirely on message passing does not suddenly give you more rigorous applications either. It takes a heck of a lot more than RPC, message passing etc. to do that!
A couple of final notes: it's entirely possible to have an RPC system that still makes it clear that there are distributed entities around (again, look at the literature please). And message passing does not constrain you to pass-by-value semantics either! If you've got a product that does that, then fine: it's an implementation issue. But that's all it is.
Hmm. Good stuff! One does wonder what happened to the "...and SOA:..." bit noted in the title:-)
Perhaps we should all have another go at defining what we believe SOA is and leave the "debaters" to it?
A lot of the writing at present is concentrating on "future" projects, using all sorts of fancy new languages and technologies.
Fact is, though, that at least 70% of the world's code is still in COBOL and the amount of it is _growing_.
A hobby horse of mine? Probably, partly due to having been in IT for quite so long; also due to working with clients that have tons of this code and no time, money, let alone people, to go and write it all again in one fell swoop!
Cheers, and all the best for 2008!
John
Many good points.
Just a couple of remarks:
> "Now we all know that Web Services uses HTTP as a transport protocol."
When there is a need for firewall crossing, HTTP is the only practical transport option. Otherwise, Web Service messages can be transported using TCP or even UDP or any other transport protocol.
> "Maybe I can handle cacheing within the application anyway?"
> "And this has nothing to do with putting the human in the loop, i.e., the fact that most people interact with the Web through a browser has nothing to do with this: REST/HTTP is just as useful when there are no human tasks involved in the system."
There are a few things you might do when there is a human in the loop that make little sense in a pure machine-to-machine interaction scenario. Cacheing is one such thing. When an application responsible for monitoring the status of some remote entity sends a GetStatus message, it expects to obtain the current status, not the one that was cached three days ago.