I started to write this as a comment to Greg's posting, but it got too long.
I think Greg still misunderstood me, though looking back at my posting I can understand why: just enough detail to confuse and not enough to clarify. Oh well, I was rushed.
First, the notion of a root coordinator isn't present in the WS-BP model at all (most certainly NOT OASIS BTP). The WS-BP approach leverages some of the JFDI (REST-based) transaction work we were doing in HP where once again there wasn't a global coordinator. It was much more akin to the weakly consistent replication models that use a "gossip" approach and no single (centralized) consistency manager, rather than the strongly consistent replication protocols that do use algorithms based on a single coordinator. Same reasons: it doesn't scale (number of participants as well as physical locality), it doesn't perform, and it doesn't work with the application/user (sometimes taking advantage of the application semantics can make it more efficient to implement a good replication protocol, particularly when you look at recovery). That's why I hinted that the transactions crowd can learn from the replication crowd.
As I think I said in the original (original) post (and during my keynote at DOA 2007): there's not necessarily a single coordinator; there will be "domains" that may have coordinators that drive participants within them (but that'll be implementation specific and hidden behind the "service" endpoint), and how these domains are pulled together into a global "transaction" will not necessarily be through a single coordinator at all. There may be a single coordinator to kick start any interactions, but that role could even be taken by the application. Semantic information about the application/service/specific interaction needs to be "injected" into this model.
Global coordination is definitely out. But that doesn't mean the system won't eventually reach a state where an external observer couldn't tell whether or not one had been used (ignoring timing constraints). As I said in the DOA keynote, it's a bit like Heisenberg's Uncertainty Principle at work: you can tell the state of the participants in the business "transaction" (interaction) but not when that state will appear, or you can look at the participant states at the exact same time but not see the same "values". Yes, the analogy breaks down under closer scrutiny, but it's a nice way to try to illustrate the differences and begin the discussion proper ;-)
If we ever get round to updating our book I can write an entire chapter around this and explain it oh so much better with diagrams. Oh and as usual: one size doesn't fit all (which makes this discussion harder to have in a blog!)
Sunday, December 30, 2007
Friday, December 28, 2007
Oh no, not again!
There have been only two occasions when my Mac has let me down badly: the first was last Christmas when the disc died. The second was (is!) yesterday, when the disc died again. I backed up 2 weeks ago, but I'm still not happy. So if you're after responses to emails, blog posts etc. you'll have to get in line and wait until I have a replacement. I think I'm going to go bang my head against a brick wall for a bit!
Thursday, December 27, 2007
REST, SOAP, WS-* and SOA: Oh My!
I've been involved with the Web Services versus REST debate in one way or another for the best part of 8 years now. Having also been involved with various standards activities in the area for just as long and also having developed applications using both approaches, it's with some level of experience and understanding that I'm still proud to call myself a fence sitter. I also belong to a silent majority of people who simply don't get involved with these SOAP vs REST (or SOA versus REST) debates as often as the vocal minority: I don't know about others, but I simply don't have the time! However, a couple of things happened recently that pushed me into writing this. The first is that JJ asked me to co-author some work in this space to try to help settle the discussion (at least in some respects) and the second was editing the InfoQ piece on what Ganesh had said.
I agree broadly with Ganesh and have been saying the same things for years. When discussing MEST with Jim and Savas in its early days, we covered the same ground: distributed computing practitioners have been doing this work for years. I believe that's why they eventually clarified that MEST isn't necessarily anything new, but a term to cover an architectural approach that (some) people in the industry (and academia) have been using. I don't actually care what we call it: MEST, message-oriented, message-based, Nirvana, as long as there's something we can point to and agree on, something that has many years of good practice and use cases behind it.
I've been developing distributed systems (small and large scale [physical remoteness of participants and number of participants]) for over 20 years. I pre-date Sun RPC, for instance, going back to a time when TCP/IP wasn't the default way in which to build systems. (My first main development effort was collaborating on the Rajdoot RPC mechanism.) I still think UDP has much more to offer than TCP, which is a good general protocol for reliable delivery of messages; but if you know the specifics of your application and distributed environment, it's often better (easier, more efficient, faster) to build something on UDP. But I digress.
If you look at distributed computing (it doesn't even have to be the Internet), it's all about message passing at some level: even the dreaded RPC is simply an abstraction of two correlated messages. In the beginning that's all you had: low-level message passing primitives, and you encoded the information you wanted to convey in the message somewhere (since you were probably only talking to endpoints you had developed, it was easy to get agreement on the payload format - they did what you wanted!) But this was a pretty cumbersome and manual process, making large scale distributed systems development slow and error-prone. Then someone had the bright idea to take a high-level programming language abstraction and layer it on top of this: RPC was born. The fact that multi-threaded processes and operating systems were at least a decade away meant that most message passing implementations were synchronous anyway, so RPC was an abstraction that fit with best practices. RPC started to constrain the more open (general) interface of send-message(blob)/receive-message(blob), trading this off for ease of use. When object-oriented programming became the standard, distributed object technologies with their own versions of client/server stub generators took off. These didn't constrain the interface any more than RPC did, but they were a logical extension of the paradigm.
The "problem" with RPC (and distributed objects) is precisely that it constrains how you can (or can't) change your implementation with free abandon. The client and server stubs (the code that marshals and unmarshals parameters and opcodes and calls down to the network or up to the implementation object respectively) is closely tied to the object interface: change the interface and you must change the stubs. Requiring changes to the stubs in a closely coupled, limited distributed system is possible, but as you extend the size (range, number of objects) of that distribution it becomes difficult, if not impossible, to ensure that all users will get the new code. With a more generic interface you can modify the backend implementation (within reason) without having to regenerate the stubs. However, the problem of marshaling and unmarshaling still remains: ultimately something needs to call something concrete in order to do the work requested and somewhere there needs to be some agreement about where in the message the parameters and opcode reside to make sure that the right unit of work is performed. (The discussion about how this pushes the contract between endpoints into the message and not into the service interface is something for another day.)
If we look at the OMG's Activity Service for example (an attempt at a generic/loosely coupled [and hence more extensible] transactional infrastructure), the participants are all implementations of the CORBA Action interface that has a single method, processSignal (you won't find a prepare, commit or rollback method signature anywhere). The parameter to processSignal is a Signal, which is essentially a CORBA any: anything can be encoded within it of arbitrary complexity or simplicity. Therefore Action participants can change without affecting the sender code directly (in theory!) But how does this affect the ultimate application? Since it is working in terms of received Signals which have any information encoded within them, it is now very similar to the original low-level TCP/IP receiver/dispatcher code: although the low-level infrastructure does not change if the Action implementations change, the application developer (or in this case the Activity Service user) must become responsible for encoding and decoding messages received and acting on them in the same way as before, i.e., dispatching to methods, procedures or whatever based on the content of the Signal.
At the low level, messages (Signals) can carry any data, but higher up the stack the application developer is constraining the messages by imposing syntactic and semantic meaning on them (based on the contract that exists between sender and receiver): back to the opcodes and parameters. Therefore, at the developer's level, changes to the implementation (the contract, the object implementation etc.) do affect the developer again: this can never be avoided since at some point you need the equivalent of a dispatching stub if you want to do the work. The message-driven pattern simply moves the level affected by change up the stack, closer to the developer: in some cases that may well be the right place for decisions on that change to be made; in others it isn't. If you have the right tools to assist in the development of distributed systems based on this approach, then it's fine and can really help bring flexibility and extensibility to your systems. But without those tools, it can be a problem, particularly as you want to scale your systems beyond your own organisation (or even your own department!)
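Here's a rough Java sketch of that pattern; the Signal and Action types are simplified stand-ins for the CORBA ones (the real Signal carries a CORBA any, not a byte array), but they show how the dispatching that a stub used to do ends up back in application code.

    // Simplified stand-ins for the CORBA Signal/Action types: the transport-level
    // interface never changes, but the application still has to decode the payload
    // and dispatch on it, just as with raw message passing.
    final class Signal {
        final String name;      // e.g. "prepare", "commit", "rollback"
        final byte[] payload;   // arbitrary encoded data (a CORBA any in the real spec)
        Signal(String name, byte[] payload) { this.name = name; this.payload = payload; }
    }

    interface Action {
        Signal processSignal(Signal signal);   // the single, generic operation
    }

    final class TwoPhaseParticipant implements Action {
        @Override
        public Signal processSignal(Signal signal) {
            // The dispatching that stubs used to do now lives in application code.
            switch (signal.name) {
                case "prepare":  return new Signal("vote-commit", new byte[0]);
                case "commit":   return new Signal("committed", new byte[0]);
                case "rollback": return new Signal("rolled-back", new byte[0]);
                default:         return new Signal("unknown-signal", new byte[0]);
            }
        }
    }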
Now we all know that Web Services uses HTTP as a transport protocol. It's fair to say that this is a bastardisation of HTTP. I was at the first OMG meeting where the ideas behind SOAP were introduced and it was pretty evident (and admitted by some) that the reason for using HTTP was to tunnel through firewalls. This fact has probably been instrumental in limiting the bindings of SOAP, but also key to its adoption. Naturally enough RPC was the approach that pervaded Web Services development. That's because the tools were there (from distributed object systems) and it fit the applications and services that were being developed. Sure RPC is limiting as I mentioned before. But in the grand scheme of things it's hardly a great evil as some try to make out. Sometimes there are good reasons why you should use RPC. Don't let anyone dissuade you from that. But sometimes there are good reasons why you shouldn't. You need to look at what you're trying to accomplish and fit the right tool (abstraction in this case) to the right job. If it's RPC, then go for it! If you've done your homework about your needs and the assumptions made about your application, services and infrastructure, don't let someone who hasn't persuade you otherwise just because "the Web doesn't work that way". Let's remember the Million Flies Argument!
In general the way we've been evolving WS-* standards and specifications is away from RPC and back to a more message-oriented approach, with one-way message invocations, to facilitate loose coupling and the kinds of long-duration interactions we see on the Internet (I think one of the first specifications to really push this was WS-CAF). Correlation of these one-way messages is used to achieve request/response interactions (aka RPC). But this whole approach still constrains the interface: changing the backend implementation is only possible in a limited way. Yes, this has all sorts of other effects, such as the inability to utilise HTTP caching, but if I don't need that what's the problem? Maybe I can handle caching within the application anyway? Believe it or not, caching protocols did exist before the Web came on the scene! But this is not a black-or-white argument: the problems that exist because of the way in which Web Services use HTTP are important to some developers and we should not ignore them. But neither should we make them the central reason for not using Web Services.
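Correlation itself is a simple idea. Here's a minimal Java sketch (hypothetical API, not any particular WS-* stack) of turning two one-way messages into a logical request/response; the hard part, as ever, is agreeing on what goes inside the messages.

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;

    final class CorrelatingClient {
        private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

        // Send a one-way message tagged with a correlation id and return a future
        // that completes when (and if) the matching one-way reply arrives.
        CompletableFuture<String> request(String body) {
            String correlationId = UUID.randomUUID().toString();
            CompletableFuture<String> reply = new CompletableFuture<>();
            pending.put(correlationId, reply);
            sendOneWay(correlationId, body);   // fire and forget
            return reply;                      // the caller decides whether to block
        }

        // Called by the messaging layer whenever a one-way message arrives.
        void onMessage(String correlationId, String body) {
            CompletableFuture<String> reply = pending.remove(correlationId);
            if (reply != null) {
                reply.complete(body);
            }
        }

        private void sendOneWay(String correlationId, String body) {
            // transport specific: SOAP/HTTP, JMS, raw sockets, ...
        }
    }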
But the REST protagonists (and let's make this clear, most of them are really talking about REST/HTTP) use the uniform interface and resource-oriented approach of the Web to show that it is superior to SOAP/HTTP. Well as I said earlier, I like REST and technically there is no reason we cannot do what is done in WS-* with it. But the Web does have its problems too. For example, broken links, the lack of orphan detection and elimination. Of course you can live with these deficiencies: we do that every day. But they force the developer into a mindset that could otherwise be simplified and improved. Now I'm not suggesting that WS-* would solve these issues either! I'm simply pointing out that it's not a done-deal with REST. But developing using REST does have some significant advantages over SOAP for certain types of application. And this has nothing to do with putting the human in the loop, i.e., the fact that most people interact with the Web through a browser has nothing to do with this: REST/HTTP is just as useful when there are no human tasks involved in the system.
So where does this leave us? I'm a fence sitter because I've never been someone who believes in one-size fits all. A good architect or developer needs to be open to all of the possibilities when tackling any problem. Approaches such as REST or Web Services should be seen as tools in your tool belt, to be used as and when necessary (although with enough force you could use a hammer to cut wood, that's not normally the tool you'd use!) I think the debate between REST and Web Services people has become too polarised and there is a lot of Emperor's New Clothes Syndrome going around. No one should be thinking that Web Services or REST are meant as a replacement for (all) pre-existing distributed system infrastructures. And you should definitely not be pressured into one approach or another! Have an open mind and match your requirements with the capabilities offered by each approach (and let's not rule out some of the older technologies like CORBA or DCOM, that still have things to offer). Certainly when I'm developing "Internet scale" applications, I'll look at all possible approaches and choose the right one for the right job. Getting input from others, particularly based on their experiences, is always a good thing as well. But remember: your mileage may vary. What's right for one person/organisation may not be right for you. Don't follow the crowd because they are vocal: the emperor may be naked after all!
Wednesday, December 12, 2007
Hmmm, Web 2.0 features on my blog
While reading my friend Greg's response to my recent posting on transactions and SOA (really on transactions and scale), I noticed that his posts were flavoured with Web 2.0 style labels. I didn't even realise our shared blogging system had been updated to support such a thing. D'oh! Yet another feature I'll have to get used to.
Anyway, I also realised that maybe my post wasn't explicit enough with regards to transaction futures, so here goes again. I don't see distributed ACID transactions having much of a future in large scale systems. I do think that something called a transaction coordinator, with an associated transaction model, has an important role to play, though the semantics such models offer to the developer will be different (and not necessarily subtly different either). If you look at some of the extended transaction models that we looked at years ago, they do blur the distinction between what you might class as workflow and "transactions". But there's still a reliable coordinator in there that controls the state transitions and can "do the right thing" on failure and recovery.
OK, enough of this for now. I've got to go and present.
Friday, December 07, 2007
Large-scale distributed transactions
I've been working with transactions for quite a while and in the area of large-scale (numbers of participants, physical distance) since the original work on the Additional Structuring Mechanisms for the OTS (aka Activity Service). However, it wasn't until Web Services transactions, BTP, WS-CAF and WS-TX that theory started to get put into practice. We first started to talk about relaxing the ACID properties back with the CORBA Activity Service, but it was with the initial submissions to BTP that things started to be made more explicit and directly relevant.
Within the specifications/standards and associated papers or presentations, we made statements to the effect that isolation should be a back-end issue for services or the transaction model (remembering that one-size does not fit all). The notions of global consistency and global atomicity were relaxed by all of the standards. For instance, sometimes it is necessary to commit some participants in a transaction and roll back others (similar to what nested transactions would give us). Likewise, globally consistent updates and a globally consistent view of the transaction outcome have to be relaxed as you scale up and out.
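If it helps, here's a small Java sketch (hypothetical types, nothing from the actual specifications) of what relaxing global atomicity looks like: the coordinator can direct different participants to different outcomes, rather like nested transactions would allow.

    import java.util.LinkedHashMap;
    import java.util.Map;

    enum Outcome { CONFIRM, CANCEL }

    interface Participant {
        void confirm();
        void cancel();
    }

    final class RelaxedCoordinator {
        private final Map<Participant, Outcome> decisions = new LinkedHashMap<>();

        // The decision can be made per participant, e.g. driven by business rules,
        // rather than being a single global commit-or-rollback.
        void decide(Participant p, Outcome outcome) {
            decisions.put(p, outcome);
        }

        void complete() {
            decisions.forEach((p, outcome) -> {
                if (outcome == Outcome.CONFIRM) p.confirm(); else p.cancel();
            });
        }
    }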
Now I didn't find this as much of a leap of faith as some others, but I think that's because when I was doing my PhD I spent a lot of time working with weak consistency replication protocols. There's always been a close relationship between transactions and replication. Traditional replica consistency protocols are strongly consistent: all of the replicas are kept identical and this is fine for closely coupled groups, but it doesn't scale. Therefore, weak consistency replication protocols evolved in the 1980s and 1990s, where the states of replicas are allowed to diverge, either forever or for a defined period of time (see gossip protocols for some background). You trade off consistency for performance and availability. For many kinds of applications, this works really well.
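A minimal sketch of the weak consistency idea, assuming a simple last-writer-wins rule (only one of many possible policies): replicas exchange updates lazily, are allowed to diverge for a while, and there is no central consistency manager.

    import java.util.HashMap;
    import java.util.Map;

    final class Replica {
        private final Map<String, Long> versions = new HashMap<>();   // key -> logical timestamp
        private final Map<String, String> values = new HashMap<>();

        void write(String key, String value, long timestamp) {
            // keep the most recent write we have seen for this key
            if (timestamp >= versions.getOrDefault(key, 0L)) {
                versions.put(key, timestamp);
                values.put(key, value);
            }
        }

        // Periodically push our state to another replica; no central coordinator.
        void gossipTo(Replica other) {
            versions.forEach((key, ts) -> other.write(key, values.get(key), ts));
        }

        String read(String key) {
            return values.get(key);   // may be stale: that's the trade-off
        }
    }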
It turns out that the same is true for transactions: in fact, it's necessary in Web Services if you want to glue together disparate services and domains, some of which may not be using the same transaction implementation behind the service boundary. I still think the best specification to illustrate this relaxation of the various properties is WS-BusinessProcess, part of WS-TransactionManager (OASIS WS-CAF). Although Eric and I came up with the original concept, we were never able to sell it to our co-authors on WS-TX (so far). I think one of our failings was to not write enough papers, articles or blogs about the benefits it offered and the practical scenarios it fit. However, every time I explained it to people in the field it was such an easy sell for them to understand how it fit into the Web Services world so much better than other approaches. (The original idea behind WS-BP came from some of the RESTful transactions work we did in HP, where it was code-named the JFDI-transaction implementation.)
I still find it a pleasant surprise that although our co-authors from Microsoft on WS-TX didn't get the reasons behind WS-BP, other friends and colleagues such as Pat Helland started to write about the necessity to relax transactionality. I like Pat's use of relativity to explain some of the problems. However, when I had to come and talk about what we'd been doing in the world of transactions for the past decade I thought Heisenberg's Uncertainty Principle was perhaps slightly better: you can either know the state that all participants will have, but not when; or vice versa.
Sunday, November 25, 2007
Close call!
I've given a few keynote speeches over the years and this year is no different: as well as YR-SOC earlier this year, I'm also giving the keynote at DOA 2007 in the Algarve. As usual I left writing the speech until the last minute (yesterday) and thought I'd done pretty well. Until, that is, I checked what I'd told the organisers several months ago I would be speaking on: not quite what I'd gone and written about! I have no idea what caused me to think I was speaking on A when in fact it was Z, but hey, at least I checked before I stood up to talk! So it's back to the drawing board.
I'm here until Thursday, so I should be able to watch Don's keynote. We caught up at HPTS only a few weeks ago, but it's a relatively small community we live in so I'm no longer surprised at the number of times I meet friends/colleagues from the other side of the world.
Sunday, November 18, 2007
Evolve or die
It's a shame when someone isn't allowed to change their opinions over time without a backlash. In many ways scientific ideas are a really good example of Darwinian Evolution: the ones that best fit the observations continue whereas the others fall by the wayside. But today's ideas and theories may not be relevant in the near or far future and may even be ridiculed by those who come after us. I know that I no longer subscribe to many of the ideas and beliefs I had 20 years ago and like to think that I'm continually updating my thoughts. To do otherwise encourages stagnation. Whether or not you agree with Steve, he shouldn't be vilified for re-evaluating his beliefs based on his experiences.
Friday, November 16, 2007
Way too busy for my own good
Lots of things have happened over the past few months that normally I'd blog about but simply haven't had a chance to owing to work load. So I thought maybe I'd jot down some notes here to at least remind myself that I need to expand on them later:
- I want to say something about Extreme Transaction Processing (yet another Gartner phrase I'm sure I'll love).
- Some of Pat's recent work has echoes of previous thoughts and I want to say a thing or two about it. In some ways this plays into Extreme TP too.
- RESTful transactions.
- HPTS 2007 and other conferences I've been to (and am going to be at).
A tribute to Jim
Somehow I'm going to try to attend. Jim was important for many different reasons, as this tribute will no doubt show.
Sunday, November 04, 2007
Well worth a read
I'm currently working my way through Discarded Science, a wonderful journey through the evolution of our current ways of thinking and reasoning about life, the universe and everything. I'd come across many of the events described before, for example during my physics degree or reading copious scientific journals and watching programs over the years (e.g., the Hollow Earth and Flat Earth notions), but it's nice to have them all brought together in one easily digestible book. Even if your strong suit isn't science, this book is well worth a read.
I haven't finished the book yet, but I hope it makes the strong statement that science isn't about proving anything: it's about disproving the incumbent theories. I can't remember who said that first, but I'm sure I heard Feynman say it on more than one occasion.
I wonder how many of our current beliefs will appear in a 22nd century version?
Thursday, September 20, 2007
More from Bill
Bill responded to my response around his original posting. I'm not going to cut-and-paste everything here, but will concentrate on the main points again. I'll have to delete some of the text from Bill's blog, but hopefully I don't lose the context as a result:
"... I think the layers you have to have in sync are very different in size and complexity. While a REST over HTTP web service only really requires an HTTP client, HTTP web service, and maybe an XML parser, WS-* requires those plus all the WS-* libraries, plus any generated stubs. What I like about REST over HTTP you can actually tell the server what data format you’re sending (content-type) and what format you want back (accept)."
I think Bill's confusing implementations with what's defined by the standards. All WS-* does is state what flows on the wire, e.g., the format of a prepare call. It doesn't define interfaces per se (let's not get into the WSDL versus IDL debate) and really only defines the legal and standard ways in which endpoints can interact. How implementations choose to support this is outside the scope of all of the standards/specifications. Some do it well, some don't. In terms of REST, you still need some kind of programmer support when you think about it (what do you think is going on in the browser?)
Bill then looks at my point around defining the message formats so endpoints can communicate.
"Don’t you have to do the same thing with WS? You still have to agree on the message you are sending. What is the name of the remote method call you are invoking. What are its parameters. What is the data format of its parameters, and so on."
Yes, but that's precisely my point (remember, I said that there's no technical impediment to doing this - it's entirely political and monetary). The companies behind Web Services include IBM, BEA, Red Hat/JBoss, Oracle, IONA, TIBCO, webMethods/Software AG, HP etc. etc. etc. and they've spent the last 7 years defining these payloads and message exchange patterns. Sure it's complex, but when you have to do complex things the infrastructure becomes complex too, either explicitly or implicitly (hidden behind service endpoints). But my point is that it's done in Web Services and we have a set of agreed standards that allow us to do business over the intranet or internet with arbitrary vendors and an arbitrary number of those vendors. We don't need to sit down and argue what it means to associate a transaction policy with a service endpoint, or what it means to have a service fail in the middle of a business transaction. Those things are hard to agree on. Yes, we could do it all again around REST, but I don't think that's going to happen any time soon.
"I am officially in my late 30s. I did work through both the short-lived DCE and long-lived CORBA eras that touted the same story of interoperability. Sorry if I am skeptical and worried that the industry will yet again re-invent itself in another 5-10 years."
Same here, but add a few more years and distributed systems (yes, I feel old at times!) These things do go in cycles. I also have no doubt that Web Services are not the end result and we'll see more evolution and revolution to come. But at the moment Web Services have it. Plus, as I've said before, this is the only time since the start of the WWW that all of the major players have agreed on something like this. It never happened with CORBA. It isn't happening with REST.
"The idioms I described with jBPM use all three of these subsystems, but they do not require a remote protocol definition. As responsible engineers, we should be questioning the need for WS-*. The amount of investment is just too huge for any organization. There’s too much money and time to be lost."
I don't see it as black and white: this is precisely why I don't believe there is one good way of tackling everything. WS-* has its place. So has REST. But then again, a good ESB can help tie all of this together as well. As for the investment reference, I've done some of this in REST before and it was just as time consuming as WS-*. More so now when you have to fight against the WS-* wave of supporters (of which I am one, sitting on my fence).
"Why the need for BA if it is not going to do things automatically for you? If the line between your business process and your compensation engine is starting to blur, why not have your bpm and compensation engine be the same thing?"
BA can do things for you automatically. It just doesn't have to. Plus, don't think of a BA coordinator as being a separate (centralized?) service: all WS-BA defines is a protocol engine; where it sits is entirely up to the developer. So it could well be at the heart of a workflow system. One of the things I've been trying to show is that you still need a coordinator somewhere: something that remembers who and what needs to be done in a reliable manner. It doesn't have to have a narrow interface (commit or rollback), but can have complex ways in which its internal protocol is driven. That was one of the reasons behind the CORBA (J2EE) Activity Service. Hey, we have coordinator technology that'd let us do that!
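To illustrate the point (hypothetical names, nothing to do with the WS-BA schema itself): the coordinator is just a protocol engine that reliably remembers who is enlisted and what to tell them, and it can sit inside a workflow engine just as easily as behind a remote endpoint.

    import java.util.ArrayList;
    import java.util.List;

    interface BusinessParticipant {
        void close();        // all work completed successfully
        void compensate();   // undo previously completed work
    }

    final class EmbeddableCoordinator {
        private final List<BusinessParticipant> enlisted = new ArrayList<>();

        void enlist(BusinessParticipant p) {
            // a real coordinator would write this to a durable log before returning
            enlisted.add(p);
        }

        void closeAll()      { enlisted.forEach(BusinessParticipant::close); }
        void compensateAll() { enlisted.forEach(BusinessParticipant::compensate); }
    }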
"Not so sure this is true. If your content type is text/xml, you’ll probably have an XSD associated with it. If you use the design by contract to implement your web services that people like the Spring guys are pushing, you really have the same thing."
OK, so go and persuade Oracle, Amazon, Google, Adobe, Microsoft, TIBCO, company ABC, company XYZ etc. to agree to this and maybe we can start talking about a new global standard for information interchange.
"What I really want here is to have the client drive the application and simplify the complexity of the distributed protocol."
You should take a look at WS-CAF too BTW.
"This is why I said I really liked the idea of using BA with a web application. It would all be local and controllable. After reading up on BPEL compensation handlers, I also like the idea of using them with RESTful web services. Again, put all the responsibility on the client for managing the coordination. Then we don’t have a dual dependency on an in-sync client and server."
Well I still think you'll find that there's a coordinator hidden in there somewhere.
"... I think the layers you have to have in sync are very different in size and complexity. While a REST over HTTP web service only really requires an HTTP client, HTTP web service, and maybe an XML parser, WS-* requires those plus all the WS-* libraries, plus any generated stubs. What I like about REST over HTTP you can actually tell the server what data format you’re sending (content-type) and what format you want back (accept)."
I think Bill's confusing implementations with what's defined by the standards. All WS-* does is state what flows on the wire, e.g., the format of a prepare call. It doesn't define interfaces per se (let's not get into the WSDL versus IDL debate) and really only defines the legal and standard ways in which endpoints can interact. How implementations choose to support this is outside the scope of all of the standards/specifications. Some do it well, some don't. In terms of REST, you still need some kind of programmer support when you think about it (what do you think is going on in the browser?)
Bill then looks at my point around defining the message formats so endpoints can communicate.
"Don’t you have to do the same thing with WS? You still have to agree on the message you are sending. What is the name of the remote method call you are invoking. What are its parameters. What is the data format of its parameters, and so on."
Yes, but that's precisely my point (remember, I said that there's no technical impediment to doing this - it's entirely political and monetary). The companies behind Web Services include IBM, BEA, Red Hat/JBoss, Oracle, IONA, TIBCO, webMethods/Software AG, HP etc. etc. etc. and they've spent the last 7 years defining these payloads and message exchange patterns. Sure it's complex, but when you have to do complex things the infrastructure becomes complex too, either explicitly or implicitly (hidden behind service endpoints). But my point is that it's done in Web Services and we have a set of agreed standards that allow us to do business over the intranet or internet with arbitrary vendors and an arbitrary number of those vendors. We don't need to sit down and argue what it means to associate a transaction policy with a service endpoint, or what it means to have a service fail in the middle of a business transaction. Those things are hard to agree on. Yes, we could do it all again around REST, but I don't think that's going to happen any time soon.
"I am officially in my late 30s. I did work through both the short-lived DCE and long-lived CORBA eras that touted the same story of interoperability. Sorry if I am skeptical and worried that the industry will yet again re-invent itself in another 5-10 years."
Same here, but add a few more years and distributed systems (yes, I feel old at times!) These things do go in cycles. I also have no doubt that Web Services are not the end result and we'll see more evolution and revolution to come. But at the moment Web Services have it. Plus, as I've said before, this is the only time since the start of the WWW that all of the major players have agreed on something like this. It never happened with CORBA. It isn't happening with REST.
"The idioms I described with jBPM use all three of these subsystems, but they do not require a remote protocol definition. As responsible engineers, we should be questioning the need for WS-*. The amount of investment is just too huge for any organization. There’s too much money and time to be lost."
I don't see it as black and white: this is precisely why I don't believe there is one good way of tackling everything. WS-* has its place. So has REST. But then again, a good ESB can help tie all of this together as well. As for the investment reference, I've done some of this in REST before and it was just as time consuming as WS-*. More so now when you have to fight against the WS-* wave of supporters (of which I am one, sitting on my fence).
"Why the need for BA if it is not going to do things automatically for you? If the line between your business process and your compensation engine is starting to blur, why not have your bpm and compensation engine be the same thing?"
BA can do things for you automatically. It just doesn't have to. Plus, don't think of a BA coordinator as being a separate (centralized?) service: all WS-BA defines is a protocol engine; where it sits is entirely up to the developer. So it could well be at the heart of a workflow system. One of the things I've been trying to show is that you still need a coordinator somewhere: something that remembers who and what needs to be done in a reliable manner. It doesn't have to have a narrow interface (commit or rollback), but can have complex ways in which its internal protocol is driven. That was one of the reasons behind the CORBA (J2EE) Activity Service. Hey, we have coordinator technology that'd let us do that!
"Not so sure this is true. If your content type is text/xml, you’ll probably have an XSD associated with it. If you use the design by contract to implement your web services that people like the Spring guys are pushing, you really have the same thing."
OK, so go and persuade Oracle, Amazon, Google, Adobe, Microsoft, TIBCO, company ABC, company XYZ etc. to agree to this and maybe we can start talking about a new global standard for information interchange.
"What I really want here is to have the client drive the application and simplify the complexity of the distributed protocol."
You should take a look at WS-CAF too BTW.
"This is why I said I really liked the idea of using BA with a web application. It would all be local and controllable. After reading up on BPEL compensation handlers, I also like the idea of using them with RESTful web services. Again, put all the responsibility on the client for managing the coordination. Then we don’t have a dual dependency on an in-sync client and server."
Well I still think you'll find that there's a coordinator hidden in there somewhere.
Tuesday, September 18, 2007
Some comments around REST and compensations
Bill has written an interesting post which is essentially REST+compensations versus WS-*. While I agree with some of what he has to say (we did REST+compensations at HP back in 2000 - yes Jonathan, I'm still looking for the papers ;-), I have to disagree with several of his observations. Before I look at them in turn, it's best to remind everyone that I'm a "fence sitter" as Mark calls me: I do believe that REST and WS-* have their places and it's not an either/or situation. I sit on the fence for a number of reasons, but one of them is that I don't believe in the 'one-size fits all' argument.
Before I go on, it's important for me to mention that I don't want this to degenerate into a REST versus WS-* debate. In general I agree with the underlying concepts that Bill is discussing. I just know it's not as easy as it seems: it's not clear cut. But neither should you believe that Web Services are necessarily the answer to Life, The Universe and Everything.
Anyway, down to business. The first thing Bill mentions that I have to disagree with is around maintainability.
"One of my concerns is the complexity of the WS-* layer and how it affects maintainability and evolvability of a distributed system. WS-* requires both the client and server to have the necessary middleware components and libraries installed to work."
Yes, but that's the case for any distributed system: the endpoints have got to be able to understand each other or go through some intermediary that can do the protocol/data translation. The same is true for HTTP though.
"What I continually like about REST over HTTP is that there is little to no requirement on either the client or server side for additional libraries or generated stubs other than a http client library and a http server.
So let's assume we can get everyone to agree to use HTTP (not a big assumption, so I'm happy with this). Then we have to get them to agree to the protocol data, the format, representation, etc. that actually flows between the endpoints. It's no good someone saying "because we agree on PUT or GET that's the end of the story." That's a bit like saying that because we agree on using letters and envelopes to send mail around the world the recipients will be able to understand what's being sent! Sure, assuming everyone in the world has an address and a letter box, the mail will get through, but that's not good enough to reliably place an order for goods or send money etc.
Don't overlook what has been driving WS-* over the past few years: interoperability. Yes, we have interoperability on the WWW (ignoring the differences in HTML syntax and browsers). But we do not have interoperability for transactions, reliable messaging, workflow etc. That's not to say we can't do it: as I said before, we did manage to do REST+transactions in HP but it was in a small-scale deployment involving only a couple of partners. There is no technical impediment to doing this: it's entirely political. It can be done, I just don't see it ever being done. Until it happens, REST/HTTP cannot compete with the kinds of heterogeneous out-of-the-box interoperability that we have demonstrated with WS-*. And don't point me at Amazon (Werner and I are good friends and I know why they offer REST and WS-*): they're a single company and while they could work with partners to agree on formats etc. that doesn't scale. It's ad hoc.
Bill's next point was around flexibility. He mentions the ordering of compensations can sometimes (often?) be important and that you can't rely on ordering within WS-BA. Unfortunately that's not correct, as was pointed out several years ago. WS-BA works with scopes (similar to nested transactions) and you can control when and how different scopes go off, effectively controlling the order (and individual atomicity) of each scope. WS-BPEL started life using WS-BA for its compensation handlers and there's a lot of flexibility there. Obviously if the API that is layered on the WS-BA implementation does not support this richness, then that's a narrowing of the capabilities, but the underlying protocol is not to blame.
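Here's a rough Java sketch of the scoping idea (hypothetical names, not the actual WS-BA or WS-BPEL APIs): the parent scope records when its children complete, so compensation order stays under application control rather than being fixed by the protocol.

    import java.util.ArrayDeque;
    import java.util.Deque;

    final class Scope {
        private final String name;
        private final Deque<Scope> completedChildren = new ArrayDeque<>();

        Scope(String name) { this.name = name; }

        // Record that a child scope has completed, preserving the order.
        void childCompleted(Scope child) {
            completedChildren.push(child);
        }

        // Compensate children in reverse completion order (one common policy;
        // the application could equally choose some other ordering).
        void compensate() {
            while (!completedChildren.isEmpty()) {
                completedChildren.pop().compensate();
            }
            System.out.println("compensating scope " + name);
        }
    }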
"While WS-BA puts the onus on both the client and server to have WS libraries that are of the same version and can interact with one another. This all works beautifully in controlled environments. You have a very nice decoupling between compensations and business logic in both client and server side code. But…. What happens when BA versions are out of sync, there’s interoperability problems, or even worse, one or more of the services you have to coordinate does not support WS-BA?"
I understand what Bill is on about, but I keep coming back to the same thing: how is this any different from getting REST developers to agree on the payload format, version etc? Just because something supports HTTP (or REST) does not mean it somehow magically understands every single PUT or GET request you send to it. This isn't a dig at Bill, but it is something that has annoyed me for the past 7 years: some REST advocates seem to think that just because the Web works on REST principles, if you use them to develop your own applications/services you'll benefit from them too. D'Oh! Why should that be the case? The Web works because the standards/specifications on which it is based were laid down back in the early 1990's and haven't really changed. The W3C manages them and everyone agreed to abide by them very early on; there's strong commercial incentive not to change things. That has nothing to do with REST and everything to do with protocol agreement: same thing has happened with WS-*. So just because I decide to use REST and HTTP doesn't mean I get instant portability and interoperability. Yes, I get interoperability at the low level, but it says nothing about interoperability at the payload. That's damn hard to achieve, as we've seen over the past 7+ years.
At least with WS-BA (and many of the WS-* specifications/standards) there's an agreed document (standard) that we can point to and test interoperability against. Yes it requires everyone to use it, but that's no different to telling everyone to use HTTP or understand English when reading my letters.
"You still have strong decoupling in a bpm driven activity. The difference is that instead of compensation being hidden by the server, it is visible, and in your face, embedded directly in the business process. With jBPM, you can look at the process diagram and know exactly the flow of success and failure conditions of your distributed application. With BA, you would have to dive into the documentation and code to determine how your application functions."
Yes, that's right, but no different to what you'd find with a good WS-BPEL implementation. WS-BA can be used stand-alone, but as I said before, it was originally developed to compliment WS-BPEL. If you look at the compensation handlers, for instance, you can see how WS-BA can play in that space. Now not all WS-BPEL implementations use WS-BA (political reasons removed the dependency on the specification back in 2003), but some do and do it well, giving you all of the capabilities mentioned above.
"Transactions that have heuristic outcomes many times require human intervention as well as action by the framework itself. Yes, there’s a lot a transaction manager could do to make it easier to record business activities, but you’re still going to have interact with a real person to resolve many issues."
Agreed, but as I said in the original blog entry, sometimes dealing with a problem there and then can save you a lot of time, be more efficient, reliable and fault-tolerant. Since the transaction coordinator records the necessary compensation information, it can try to deal with the compensation immediately (and reliably), based on the information provided to it at enlistment time. In cases where the backend system/service cannot/will not expose compensation capability to the user (there are organisations that do not do this for a number of reasons, including security) then self-compensation followed by sys-admin logging/tasking (which could be done via something like jBPM) is the only option.
So what kind of conclusions can you draw from this? First, I like the idea of REST+X, where X could be transactions, reliable messaging, workflow etc. I wish it could be used in the large, but for the forseeable future it'll be for smaller scale (number of vendors, not physical distribution) applications. Web Services are beneficial, particularly if used right. But as with all technologies, they can be misused. Also, not all things are meant to be used in isolation (WS-BA). Finally, you can't get away from needing a coordinator in these situations: you may hide it, but if you need guaranteed coordination in the presence of failures then you really can't get away from something like a transaction coordinator. How I miss the CORBA Activity Service: it helps unify all of this stuff!
Before I go on, it's important for me to mention that I don't want this to degenerate into a REST versus WS-* debate. In general I agree with the underlying concepts that Bill is discussing. I just know it's not as easy as it seems: it's not clear cut. But neither should you believe that Web Services are necessarily the answer to Life, The Universe and Everything.
Anyway, down to business. The first thing Bill mentions that I have to disagree with is around maintainability.
"One of my concerns is the complexity of the WS-* layer and how it affects maintainability and evolvability of a distributed system. WS-* requires both the client and server to have the necessary middleware components and libraries installed to work."
Yes, but that's the case for any distributed system: the endpoints have got to be able to understand each other or go through some intermediary that can do the protocol/data translation. The same is true for HTTP though.
"What I continually like about REST over HTTP is that there is little to no requirement on either the client or server side for additional libraries or generated stubs other than a http client library and a http server.
So let's assume we can get everyone to agree to use HTTP (not a big assumption, so I'm happy with this). Then we have to get them to agree to the protocol data, the format, representation, etc. that actually flows between the endpoints. It's no good someone saying "because we agree on PUT or GET that's the end of the story." That's a bit like saying that because we agree on using letters and envelopes to send mail around the world the recipients will be able to understand what's being sent! Sure, assuming everyone in the world has an address and a letter box, the mail will get through, but that's not good enough to reliably place an order for goods or send money etc.
Don't overlook what has been driving WS-* over the past few years: interoperability. Yes, we have interoperability on the WWW (ignoring the differences in HTML syntax and browsers). But we do not have interoperability for transactions, reliable messaging, workflow etc. That's not to say we can't do it: as I said before, we did manage to do REST+transactions in HP but it was in a small-scale deployment involving only a couple of partners. There is no technical impediment to doing this: it's entirely political. It can be done, I just don't see it ever being done. Until it happens, REST/HTTP cannot compete with the kinds of heterogeneous out-of-the-box interoperability that we have demonstrated with WS-*. And don't point me at Amazon (Werner and I are good friends and I know why they offer REST and WS-*): they're a single company and while they could work with partners to agree on formats etc. that doesn't scale. It's ad hoc.
Bill's next point was around flexibility. He mentions the ordering of compensations can sometimes (often?) be important and that you can't rely on ordering within WS-BA. Unfortunately that's not correct, as was pointed out several years ago. WS-BA works with scopes (similar to nested transactions) and you can control when and how different scopes go off, effectively controlling the order (and individual atomicity) of each scope. WS-BPEL started life using WS-BA for its compensation handlers and there's a lot of flexibility there. Obviously if the API that is layered on the WS-BA implementation does not support this richness, then that's a narrowing of the capabilities, but the underlying protocol is not to blame.
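To make the scoping point concrete, here's a minimal Java sketch. It is not the WS-BA API: BusinessScope and Compensator are names I've invented purely to illustrate how closing or cancelling scopes in an order the application chooses gives you control over which pieces of work become permanent and which get compensated, and in what order.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Invented types for illustration only -- this is not the WS-BA API.
    interface Compensator {
        void compensate();
    }

    // A scope groups work that is either closed (made permanent) or
    // cancelled (compensated) as a unit.
    class BusinessScope {
        private final String name;
        private final Deque<Compensator> compensators = new ArrayDeque<>();

        BusinessScope(String name) { this.name = name; }

        void enlist(Compensator c) { compensators.push(c); }

        void close() { System.out.println("closed scope " + name); }

        // Compensators run in reverse enlistment order within the scope.
        void cancel() {
            System.out.println("cancelling scope " + name);
            while (!compensators.isEmpty()) {
                compensators.pop().compensate();
            }
        }
    }

    public class ScopedCompensationDemo {
        public static void main(String[] args) {
            BusinessScope booking = new BusinessScope("flight-booking");
            BusinessScope payment = new BusinessScope("payment");

            booking.enlist(() -> System.out.println("release the seat"));
            payment.enlist(() -> System.out.println("refund the card"));

            // The application controls the ordering: the booking is made
            // permanent first, and when a later step fails only the payment
            // scope is compensated.
            booking.close();
            payment.cancel();
        }
    }

The point is only that the ordering and granularity sit with whoever closes and cancels the scopes, which is exactly the richness a layered API needs to expose.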
"While WS-BA puts the onus on both the client and server to have WS libraries that are of the same version and can interact with one another. This all works beautifully in controlled environments. You have a very nice decoupling between compensations and business logic in both client and server side code. But…. What happens when BA versions are out of sync, there’s interoperability problems, or even worse, one or more of the services you have to coordinate does not support WS-BA?"
I understand what Bill is on about, but I keep coming back to the same thing: how is this any different from getting REST developers to agree on the payload format, version etc.? Just because something supports HTTP (or REST) does not mean it somehow magically understands every single PUT or GET request you send to it. This isn't a dig at Bill, but it is something that has annoyed me for the past 7 years: some REST advocates seem to think that just because the Web works on REST principles, if you use them to develop your own applications/services you'll benefit from them too. D'Oh! Why should that be the case? The Web works because the standards/specifications on which it is based were laid down back in the early 1990s and haven't really changed. The W3C manages them and everyone agreed to abide by them very early on; there's strong commercial incentive not to change things. That has nothing to do with REST and everything to do with protocol agreement: the same thing has happened with WS-*. So just because I decide to use REST and HTTP doesn't mean I get instant portability and interoperability. Yes, I get interoperability at the low level, but it says nothing about interoperability at the payload level. That's damn hard to achieve, as we've seen over the past 7+ years.
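To put the same point in code (the payload shapes here are entirely made up): two services can both happily accept a PUT to an order resource and still not interoperate, because HTTP says nothing about what the representation means.

    // Invented payload shapes, purely to illustrate the mismatch.
    record OrderA(String customerId, int quantity, String sku) { }

    record OrderB(String customerRef, String items /* comma-separated SKUs */) { }

    public class PayloadMismatchDemo {
        public static void main(String[] args) {
            // Service A expects a PUT body like:
            //   {"customerId":"c-1","quantity":3,"sku":"ABC"}
            // Service B expects a PUT body like:
            //   {"customerRef":"c-1","items":"ABC,ABC,ABC"}
            // HTTP will deliver either to either endpoint without complaint;
            // only out-of-band agreement on the representation (schema,
            // semantics, versioning) makes the two actually interoperate.
            System.out.println(new OrderA("c-1", 3, "ABC"));
            System.out.println(new OrderB("c-1", "ABC,ABC,ABC"));
        }
    }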
At least with WS-BA (and many of the WS-* specifications/standards) there's an agreed document (standard) that we can point to and test interoperability against. Yes it requires everyone to use it, but that's no different to telling everyone to use HTTP or understand English when reading my letters.
"You still have strong decoupling in a bpm driven activity. The difference is that instead of compensation being hidden by the server, it is visible, and in your face, embedded directly in the business process. With jBPM, you can look at the process diagram and know exactly the flow of success and failure conditions of your distributed application. With BA, you would have to dive into the documentation and code to determine how your application functions."
Yes, that's right, but no different to what you'd find with a good WS-BPEL implementation. WS-BA can be used stand-alone, but as I said before, it was originally developed to complement WS-BPEL. If you look at the compensation handlers, for instance, you can see how WS-BA can play in that space. Now not all WS-BPEL implementations use WS-BA (political reasons removed the dependency on the specification back in 2003), but some do and do it well, giving you all of the capabilities mentioned above.
"Transactions that have heuristic outcomes many times require human intervention as well as action by the framework itself. Yes, there’s a lot a transaction manager could do to make it easier to record business activities, but you’re still going to have interact with a real person to resolve many issues."
Agreed, but as I said in the original blog entry, sometimes dealing with a problem there and then can save you a lot of time, be more efficient, reliable and fault-tolerant. Since the transaction coordinator records the necessary compensation information, it can try to deal with the compensation immediately (and reliably), based on the information provided to it at enlistment time. In cases where the backend system/service cannot/will not expose compensation capability to the user (there are organisations that do not do this for a number of reasons, including security) then self-compensation followed by sys-admin logging/tasking (which could be done via something like jBPM) is the only option.
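As a rough sketch of that fallback behaviour (Coordinator, Compensation and AdminTaskQueue are invented names, and a real coordinator would also write everything to a log for recovery): try the compensation automatically at the point of failure, and only raise a human task when that attempt fails.

    import java.util.ArrayList;
    import java.util.List;

    // Invented types for illustration; not a real transaction manager API.
    interface Compensation {
        void run() throws Exception;
    }

    class AdminTaskQueue {
        void raise(String description) {
            // In practice this might create a jBPM task or page an operator.
            System.out.println("ADMIN TASK: " + description);
        }
    }

    class Coordinator {
        private final List<Compensation> enlisted = new ArrayList<>();
        private final AdminTaskQueue admin = new AdminTaskQueue();

        // Compensation details are captured (and logged) at enlistment time,
        // so they are still available if the application has gone away.
        void enlist(Compensation c) { enlisted.add(c); }

        // On failure, attempt self-compensation first; fall back to a human
        // only when the automatic attempt cannot resolve things.
        void resolveFailure() {
            for (Compensation c : enlisted) {
                try {
                    c.run();
                } catch (Exception e) {
                    admin.raise("compensation failed, needs manual resolution: " + e.getMessage());
                }
            }
        }
    }

    public class SelfCompensationDemo {
        public static void main(String[] args) {
            Coordinator coordinator = new Coordinator();
            coordinator.enlist(() -> System.out.println("credit the account back"));
            coordinator.enlist(() -> { throw new Exception("backend refuses to expose compensation"); });
            coordinator.resolveFailure();
        }
    }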
So what kind of conclusions can you draw from this? First, I like the idea of REST+X, where X could be transactions, reliable messaging, workflow etc. I wish it could be used in the large, but for the foreseeable future it'll be for smaller scale (number of vendors, not physical distribution) applications. Web Services are beneficial, particularly if used right. But as with all technologies, they can be misused. Also, not all things are meant to be used in isolation (WS-BA). Finally, you can't get away from needing a coordinator in these situations: you may hide it, but if you need guaranteed coordination in the presence of failures then you really can't get away from something like a transaction coordinator. How I miss the CORBA Activity Service: it helps unify all of this stuff!
Saturday, September 15, 2007
Projects
I need a project to occupy my spare time ("spare time" - kind of a laugh).
Many years ago (towards the end of the 1980s) whilst working on our PhDs, Dan McCue and I developed a SIMULA-like simulation package in C++, cunningly called C++SIM. It was all tied up in the work we were doing at the time around replica placement to achieve high availability. It quickly took on a life of its own and we got requests to use it from a number of academic and commercial organisations. When Java came along it seemed like a good idea to port it across and JavaSim was born. (That "seemed like a good idea" principle was also what led to JavaArjuna coming from Arjuna.)
Anyway, over the years JavaSim and C++SIM have had quite a few users with several of them asking for feature requests or bug fixes. It's basically been me running the project since Dan left for pastures new. With my own work in Bluestone, HP etc. etc. I've found it harder and harder to get the time to do anything meaningful with either project. So I eventually persuaded the University to allow me to put the source code into open source, which would at least allow other people to work on it and take me out of the bottleneck. Unfortunately that was about a year ago and I've managed to do zero since then! Until today, when I finally requested a project on Codehaus. So maybe sometime in 2008 the code will appear when I find the time and inclination to do more.
In the meantime, work on a D-based transaction manager continues ;-) Why? Because it's there. I'm finding I like D more and more, so this is as much an exercise in coming to grips with the language as anything else. Plus, look what happened to JavaArjuna!
Vacation, contracts, annotations, papers etc.
Took a couple of days off this week, trying to use up what vacation I've got before the end of the year. It wasn't planned much in advance, so I didn't have any idea of what to do. In the end I did some thinking about service contracts and how Java annotations could help developers define them within our ESB (a rough sketch of the sort of thing I mean is below). I've also got yet more papers to review (these days each conference or workshop seems to bleed into one another and I'm finding it difficult to tell from the SOA/Web Services papers where one ends and another begins - which might not be a good thing for all of these workshops/conferences!) Plus I found time to write some more of the book that Thomas, Arnaud and I are doing with Thomas Erl (not sure if I've mentioned that before).
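Here's the sort of thing I mean. The annotation names are hypothetical (nothing we've shipped or agreed on): the idea is just that a plain Java service carries enough metadata for the ESB to derive the contract - operations, the logical message types they consume and produce, whether they're one-way - instead of keeping all of that in a separate descriptor.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Method;

    // Hypothetical annotations, sketched for illustration only.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface ServiceContract {
        String name();
        String version() default "1.0";
    }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Operation {
        String inbound();              // logical message type consumed
        String outbound() default "";  // logical message type produced, if any
        boolean oneWay() default false;
    }

    // A plain service class; the ESB could read the contract reflectively at
    // deployment time rather than from a hand-written descriptor.
    @ServiceContract(name = "OrderService", version = "1.1")
    class OrderService {

        @Operation(inbound = "SubmitOrder", outbound = "OrderAck")
        public String submit(String orderXml) {
            return "<ack/>";
        }

        @Operation(inbound = "CancelOrder", oneWay = true)
        public void cancel(String orderId) {
            // fire-and-forget from the caller's point of view
        }
    }

    public class ContractDump {
        public static void main(String[] args) {
            ServiceContract sc = OrderService.class.getAnnotation(ServiceContract.class);
            System.out.println("contract: " + sc.name() + " v" + sc.version());
            for (Method m : OrderService.class.getDeclaredMethods()) {
                Operation op = m.getAnnotation(Operation.class);
                if (op != null) {
                    System.out.println("  " + m.getName() + " <- " + op.inbound()
                            + (op.oneWay() ? " (one-way)" : " -> " + op.outbound()));
                }
            }
        }
    }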
So not exactly a vacation but more relaxing than work!
Wednesday, September 05, 2007
A good day's diving!
I took a day off yesterday (which means I only have to take 5 weeks vacation between now and the end of the year!) to go SCUBA diving. It's really hard finding good dive sites in the UK: OK we're an island, but the weather isn't great! Usually it's the North Sea or a lake in North Yorkshire that the Army uses for training. It does get repetitive after a while, so a new site is always nice to investigate. Which is what we did yesterday.
My diving buddy and I had heard about Capernwray, on the other side of the country to us, but somewhere the other divers talked a lot about. It only took us a couple of years to get round to it, but it was definitely worth the trip! A flooded quarry, with sunken planes, helicopters, horses and even a gnome garden. At its deepest it's 21 metres (yes, we went down to check) and the visibility was very good. Managed to get a couple of dives in before heading home. I can see this becoming a regular thing!
Sunday, September 02, 2007
Thursday, August 30, 2007
Heuristics, one-phase commit and compensations
It's a little known fact that as well as being the world's first Web Services transactions product, HP-WST also had some pretty neat non-Web Services capabilities that we're only now starting to revisit. I've been in the process of writing a paper on one of them for what seems like an age, so decided to give a brief outline here. But first a little background to put the rest into context.
One of the nice things we did with HP-WST from the start was keep the Web Services aspects separate from the core transaction engine. This is something we continued with XTS (now the Web Services transactions component of JBossTS). At the time the reason was that Jim and I needed to make parallel progress, with him concentrating on the SOAP stack (and doing some great work with the HP Web Services team at that time) and me on the protocol engine. Another reason for the separation was to try to make debugging of problems a little easier. One of the things you'll know if you've either developed a distributed system or used one, is that distributed debugging can be a PITA. It was bad enough with CORBA, but Web Services take it to another level. So we had this nice clean separation that meant you could actually configure the system (dynamically) to appear to be running the whole Web Services stack when in fact it wasn't going anywhere near the network. If you knew what you were doing (read: undocumented feature) you could configure this "loop-back" to happen either before or after the SOAP messages were created.
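A minimal sketch of that kind of separation (the names are mine, invented for illustration, and not how HP-WST was actually structured internally): the protocol engine only ever talks to a transport abstraction, and a loop-back implementation lets you exercise the coordination logic without touching the network and, if configured that way, without even building the SOAP messages.

    // Invented names for illustration; the real product's internals differed.
    interface MessageTransport {
        String send(String endpoint, String protocolMessage);
    }

    // Normal deployment: wrap the message in SOAP and send it over the wire.
    class HttpSoapTransport implements MessageTransport {
        public String send(String endpoint, String protocolMessage) {
            // ... build the envelope, POST it, return the reply ...
            throw new UnsupportedOperationException("network transport elided");
        }
    }

    // Debug deployment: short-circuit straight back into the local engine.
    // The flag decides whether the SOAP layer is still exercised or bypassed,
    // i.e. whether the loop-back happens after or before message creation.
    class LoopbackTransport implements MessageTransport {
        private final boolean buildSoap;

        LoopbackTransport(boolean buildSoap) { this.buildSoap = buildSoap; }

        public String send(String endpoint, String protocolMessage) {
            String onTheWire = buildSoap
                    ? "<soap:Envelope>" + protocolMessage + "</soap:Envelope>"
                    : protocolMessage;
            System.out.println("loop-back to " + endpoint + ": " + onTheWire);
            return "prepared";
        }
    }

    public class TransportDemo {
        public static void main(String[] args) {
            MessageTransport transport = new LoopbackTransport(true); // flip via configuration
            transport.send("urn:local:coordinator", "prepare");
        }
    }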
Now an important part of HP-WST was the compensation transaction model it supported. This was based on BTP at the time, but the idea still translates to WS-TX: instead of doing the work in the scope of a single transaction that holds on to locks and other resources for a potentially long time, you do the work in a series of smaller transactions that can each be compensated by some other transaction later. The coordinator (Atom or Cohesion in the case of BTP) remembers the list of participants and drives recovery in the event of failures, so even if your application crashes everything should be resolved.
Because of the "local transport" aspect of HP-WST, people were able to write compensations for local applications, completely ignoring the Web Services stack. Some lighthouse customers found that an interesting prospect. In particular when I was giving a presentation to one group in Madrid, we got on to something I'd been prototyping that offered a nice solution to the old problems of heuristics (how do I resolve a non-atomic transaction?) and having multiple one-phase commit participants in the same transaction (how do I resolve a non-atomic transaction?)
In both of these problem scenarios what typically happens is that someone (e.g., a system administrator) has to get to grips with the inconsistent data and figure out what was going on in the rest of the application in order to try to impose consistency. One of the important reasons this can't really happen automatically (at the TM level) is that it requires semantic information about the application that simply isn't available to the transaction system. So they compensate manually.
That was the status quo. What we were proposing was allowing developers to register compensation transactions with the coordinator that would be triggered upon certain events, such as heuristic outcomes or one-phase errors. And to do it opaquely as far as the application developer was concerned. Because these compensations are part of the transaction, they'd get logged so that they would be available during recovery. Plus, a developer could also define whether presumed abort, presumed commit or presumed nothing was the best approach for the individual transaction to use (it has an effect on recovery and failure scenarios).
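A rough sketch of the registration idea, with everything invented for illustration (a real implementation would also have to log the registered action before the transaction completes, or it is useless for recovery):

    import java.util.EnumMap;
    import java.util.Map;

    // Invented types for illustration only.
    enum FailureEvent { HEURISTIC_MIXED, HEURISTIC_HAZARD, ONE_PHASE_ERROR }

    interface CompensationAction {
        void apply() throws Exception;
    }

    class TriggeredCompensations {
        private final Map<FailureEvent, CompensationAction> actions =
                new EnumMap<>(FailureEvent.class);

        // Registered (and, in a real system, logged) while the transaction is
        // running, so the action survives a crash and can be replayed during recovery.
        void onEvent(FailureEvent event, CompensationAction action) {
            actions.put(event, action);
        }

        // Called by the coordinator when the outcome is not cleanly atomic.
        void fire(FailureEvent event) {
            CompensationAction action = actions.get(event);
            if (action == null) {
                System.out.println("no compensation registered for " + event);
                return;
            }
            try {
                action.apply();
            } catch (Exception e) {
                System.out.println("compensation failed, record it for the admin: " + e.getMessage());
            }
        }
    }

    public class TriggeredCompensationDemo {
        public static void main(String[] args) {
            TriggeredCompensations triggers = new TriggeredCompensations();
            triggers.onEvent(FailureEvent.ONE_PHASE_ERROR,
                    () -> System.out.println("reverse the one-phase resource's update"));
            triggers.fire(FailureEvent.ONE_PHASE_ERROR);
        }
    }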
Nothing really earth shattering. We'd been offering this kind of thing for a long time through nested top-level transactions, for example. But HP-WST pushed it into a wider arena. With this approach you could write your compensations to try to undo the commit of the one-phase resource, for example, or if it can't be undone then write sufficient information to help the administrator resolve it. Likewise if triggered by a specific heuristic: try to compensate directly at the time the error occurs. Obviously nothing is ever guaranteed, but sometimes being able to try to compensate at the moment the problem happens can save you time and money later.
Now where this becomes more interesting is when you consider annotations. Back in 2000 they didn't exist and we were playing with raw XML or explicit declarative approaches (the latter was a problem because we wanted to be able to apply this to existing deployments without requiring them to be re-coded). But annotations and the work that Maciej has been doing, mean that revisiting this could result in something more powerful and certainly more opaque.
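Sketched as an annotation (again purely hypothetical - this isn't Maciej's work or any shipped API), the appeal is that it can be retrofitted onto an existing class without touching its business logic: the deployer scans for the annotation and registers the marked method as the compensation to trigger on the named event.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Hypothetical annotation, for illustration only.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface CompensateOn {
        String event(); // e.g. "HEURISTIC_MIXED" or "ONE_PHASE_ERROR"
    }

    // Existing service: the business method is untouched, the compensation
    // is declared alongside it and picked up at deployment time.
    class PaymentService {
        public void debit(String account, long amount) {
            // normal business logic, unchanged
        }

        @CompensateOn(event = "ONE_PHASE_ERROR")
        public void undoDebit(String account, long amount) {
            // best-effort reversal, or enough detail for an admin to finish the job
        }
    }

    public class CompensationScan {
        public static void main(String[] args) throws Exception {
            var m = PaymentService.class.getMethod("undoDebit", String.class, long.class);
            System.out.println(m.getName() + " compensates on " + m.getAnnotation(CompensateOn.class).event());
        }
    }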
And on that note, back to work (and maybe the paper). Hopefully this has been enough to whet your appetite.
Sunday, August 26, 2007
XA versus WS-TX?
I'm really not quite sure what to say about this article. While the author is right that XA is more mature than WS-TX and that transactions are an important tool in an architect's tool-belt, saying that XA is a replacement for Web Services transactions is a bit like saying that because IIOP is more mature than SOAP we should all be using it. It's true, but it's never going to happen and overlooks what Web Services bring to distributed transactions: interoperability. I've written about that many times, so won't go over that again.
It's nice to hear that Oracle have identified problems with WS-TX. We all have throughout the evolution of the specifications/standard. WS-CAF offered a better solution overall, but didn't get the backing of IBM and MSFT, which is unfortunate: I still think that from an enterprise perspective all of the specifications within WS-CAF have technical advantages over WS-TX.
However, who hasn't identified problems in the way different XA implementations interpret the XA specification? Last time I looked, we had several workarounds for the differences between Oracle 9i and 10g, let alone how they differ between DB2 and SQLServer. Of course many of these are down to bugs in the respective XA implementation or wrong interpretations of the specification, but just saying something is XA compliant doesn't mean it immediately has a level of maturity.
WS-AT (or WS-ACID in WS-CAF) was developed to allow arbitrary two-phase commit participants to be enrolled in a transaction. Quite similar to OTS in that regard. Obviously XA is important, so it should be considered when providing any new transaction standard, but 2PC existed before XA, so it makes sense to not limit yourself if you don't have to. On that note, I hope I'm not alone in remembering the original XAML?!
Edwin's back
Via Greg, I see that Edwin is getting back in the game. I met Edwin a couple of times when we were working with Collaxa on integrating our XTS product with their BPEL product. Unfortunately some database vendor came along and messed that one up ;-)
Friday, August 24, 2007
OpenCSA Plenary is coming up
As I mentioned on Infoq, the OpenCSA Plenary is coming in the next few weeks. This will be the first time that people from outside the original authors will be able to give their input on SCA directly to the authors. One way or another it will definitely be interesting. I'd love to be able to go, but it clashes with other things I've had planned for a long time. If you're at all interested in SCA and/or want to give feedback, go along and/or sign up to the various technical committees.
Synchronous versus Asynchronous
Pat makes a very good point. Something that also drives me nuts. This has actually gotten worse in the Web Services world, where people continually talk about asynchronous invocations when they're really talking about synchronous one-way invocations. Believe it or not, there is a significant difference!
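A toy Java sketch of the distinction (names invented): a synchronous one-way call has no reply payload but still blocks the caller until the request has been handed over, while a genuinely asynchronous invocation hands control straight back and delivers any outcome later via a callback or future.

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class InvocationStyles {
        // Synchronous one-way: no reply payload, but the caller still waits for
        // the transfer (and any transport-level fault) before carrying on.
        static void syncOneWay(String msg) throws InterruptedException {
            Thread.sleep(200); // stand-in for the blocking send
            System.out.println("accepted: " + msg);
        }

        // Asynchronous: the caller gets control back immediately; the outcome,
        // if there is one, arrives later through the future/callback.
        static CompletableFuture<String> asyncInvoke(ExecutorService pool, String msg) {
            return CompletableFuture.supplyAsync(() -> "reply to " + msg, pool);
        }

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newSingleThreadExecutor();

            syncOneWay("order-1");                        // caller blocked during the send
            asyncInvoke(pool, "order-2")
                    .thenAccept(System.out::println);     // caller not blocked at all

            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.SECONDS);
        }
    }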
Thursday, August 09, 2007
You know you're getting on when ...
1) You go on vacation and visit a toy museum only to find that many of the items on show are things you had when you were a kid.
2) You take your 13 year old son to the airport for his first unaccompanied flight to see his grandparents in Canada.
Monday, July 30, 2007
Tuesday, July 17, 2007
Is anyone out there?
Our industry is always revisiting past debates. The only thing that changes is the periodicity. However, in the past when arguments raged around CORBA versus DCOM, or Mac versus PC (for example), you could be forgiven for not knowing what was said in the past, because online archiving facilities were ad hoc and references to debates within hardcopy literature required access to that literature. However, with the advent of Google there are fewer excuses for not doing your homework before re-launching those arguments. These days it's REST versus SOA, or OSGi versus Sun, but the background material exists in copious amounts if you want to search for it.
However, one debate I had hoped never to see again was: WS-AT Considered Harmful to SOA. No sh*t Sherlock! But then we've been telling people that for years. It's a bit like saying: jumping out of an airplane without a parachute will kill you. Or: playing with electrics in the bath is a bad idea. These are fairly obvious truths and I'd hoped that the same was the case with the use cases for WS-AT. But maybe not!
Many of us (the group of various WS transaction specification/standards authors) have been saying that "one size does not fit all" for years: it's why all of the specifications/standards supported multiple models. Don't get hung up on two-phase commit. It's a consensus protocol (there's a bare-bones sketch of just that after the quote below). Using it does not mean you are using all the ACID properties of a traditional transaction. Protocols like WS-AT and WS-ACID exist for one very important reason: interoperability. As I said on Arnon's blog:
"WS-AT, WS-ACID (from WS-CAF) and Atoms (from BTP) were never intended to be used for loosely coupled interactions. They are there for short duration interactions between heterogeneous transaction system implementations. If you've ever tried to get CICS to talk to Tux, or Tux to talk to JBossTS (as examples), then you'll know that it's not easy to do out-of-the-box! We tried to do transaction interoperability in the OMG through the OTS, but that took a long time and not everyone supports OTS/JTS (when was the last time you saw Microsoft with an OTS implementation?) Web Services offer interoperability as one of their main benefits. We now have demonstrated transactional interoperability between all of the major TP vendors (excluding BEA, who haven't got a WS-AT implementation). For customers who have heterpogeneous implementations, then this is a critical component.
As authors of the specifications, we have *never* said that WS-AT et al should be used for long duration interactions. Over the past 8 years the reasons for not doing so have been well documented. If people are still arguing this question then they're not reading the literature, which basically agrees with them anyway!"
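Since the "it's a consensus protocol" point is the one that seems to trip people up, here is the bare-bones sketch promised above (invented types; no logging, timeouts or recovery). The protocol itself only decides one thing - commit or roll back - and every participant applies that decision; whatever ACID properties you end up with come from what the participants choose to do in prepare and commit, not from the protocol.

    import java.util.List;

    // Invented types; just the vote and the outcome, nothing else.
    interface Participant {
        boolean prepare();   // "can you make this permanent if asked?"
        void commit();
        void rollback();
    }

    class TwoPhaseCoordinator {
        // Phase 1 collects votes; phase 2 tells everyone the single outcome.
        static boolean complete(List<Participant> participants) {
            boolean allPrepared = true;
            for (Participant p : participants) {
                if (!p.prepare()) {
                    allPrepared = false;
                    break;
                }
            }
            for (Participant p : participants) {
                if (allPrepared) p.commit(); else p.rollback();
            }
            return allPrepared;
        }
    }

    public class TwoPhaseDemo {
        public static void main(String[] args) {
            Participant db = new Participant() {
                public boolean prepare()  { System.out.println("db prepared");  return true; }
                public void commit()      { System.out.println("db committed"); }
                public void rollback()    { System.out.println("db rolled back"); }
            };
            System.out.println("outcome: " + TwoPhaseCoordinator.complete(List.of(db)));
        }
    }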
Web Services != SOA. Just because you're writing applications with Web Services doesn't mean you are (or necessarily need to be) developing with SOA principles in mind. That does not mean Web Services are a less important technology. But it seems to be something that people are still having trouble understanding.
Now we can't prevent people from using WS-AT in other situations. Just like we can't prevent people from using electrical devices in the bath. But in that case, you'd better watch out and know what you're doing and why you are doing it!
Sunday, July 15, 2007
Change of role? Not quite.
I forgot to mention, but back in June I was made Director of Engineering at Red Hat. Some people who heard about it at the time have asked me what difference it makes to my role. At the moment I'm not sure it makes any difference: I seem to be doing the same things day and night.
Friday, July 13, 2007
Feeling old!
Many things can make you feel old. For instance, it's my youngest son's 5th birthday party tomorrow, and yet it seems like yesterday when we brought him back from the hospital.
Then of course there's looking back over the work you've done. These days I tend not to have time to think about what happened last week let alone what happened twenty years ago! So it was actually quite interesting for me to spend time and try to put the various components within JBossTS into perspective for some of our newest users.
I started with the core, then went on to JTA and JTS, finishing with XTS. I didn't say as much as I could (time didn't permit), but the links within the entries cover everything. It was good to recollect for a change.
Thursday, June 28, 2007
Tuesday, June 26, 2007
Middleware for Service-Oriented Computing Workshop
I'm on the PC again and the submission deadline is the 26th of July. Check out the CFP.
Wednesday, May 30, 2007
Some form of closure?
Some of you may know that in the 9/11 attacks, I lost a friend, Ed Felt, on United 93. Ed had been traveling to San Francisco on business. I obviously feel bad enough about the attacks, but to also lose a friend drives it home much more. Plus, and this is something I've never really shouted about (for obvious reasons), I was meant to be on that flight! The only reason I canceled was because I'd just got married and had promised my wife I'd not travel out of the country for 6 months after the wedding.
So you can probably understand why it was with some trepidation that I finally watched the United 93 movie. I deliberately took a while to buy it on DVD and I've never watched it on TV. But last night I decided to bite the bullet and sit down to watch it alone. I've read reports and seen documentaries over the years about the whole event, but this movie was good in showing the utter chaos and confusion that was going on at the time. The scenes on the plane itself were the hardest: I fly a lot and could empathise with the passengers just from that perspective, but when I considered that I could have been there ...
I'm not sure if I'll ever watch the film again. I don't think I have to in order to understand what Ed's family and those of the other victims, are still going through today. I'm one of the lucky ones, but this is something that will always stay with me.
Tuesday, May 22, 2007
Woo Hoo!!
Some good news. I don't think I have to do an acceptance speech, but I'm really glad to be on the team. SCA is definitely an important industry standard in the making.
Friday, May 11, 2007
Wonderland and JavaOne
Probably the best presentation I saw at JavaOne was left to last. James Gosling previewed Wonderland. At first I thought this was just yet another Second Life, avatar-based demonstration. However, when they started showing the avatars working within the virtual world with the same programs (Firefox, OpenOffice, etc.) as you'd use in the real world, it took on a completely new importance. This was very cool. Being able to create rooms where you could display your code dynamically and updateable (on "glass" walls, no less ;-) as though you were in a physical equivalent, show presentations, have true meetings where (almost) everything you'd want to do if you were face-to-face could be done, is a significant step forwards. Nice.
JavaOne 2007
I'm at JavaOne once again, to give a couple of BOFs (one on transaction bridging with the other members of the JBossTS team and one on JBI 2.0). Overall I haven't been as impressed with the conference this year as in previous years, but maybe I've just chosen to go to the wrong presentations and BOFs. Plus, why why why why why can't Sun manage to set up a decent wireless network for the conference? For a company that used to talk about The Network is the Computer, they do a pretty bad job of it year on year!
Wednesday, April 25, 2007
Red Hat and MetaMatrix
I first came across MetaMatrix at JavaOne in 2002/2003, when I was at Arjuna, and we did some work with them over the next year or so. This should be interesting.
Saturday, April 21, 2007
Monday, March 26, 2007
Anyone know Chinese?
Apparently this is one of my articles in Chinese. Not knowing the language, I have to believe it, although I do wonder if this is a case of The Hungarian Phrasebook again.
Saturday, March 24, 2007
TDM
It's been a while since I've really been able to blog and that's all down to lack of time. Towards the end of last year, as part of the reorganisation of JBoss under Red Hat I was made Technical Development Manager of our SOA platform. What that basically means is that I'm in charge of the technical direction, development and productisation of everything SOA within the company. I went from managing 2 groups to 7 overnight, with all that entails. Pretty interesting, I have to say, but definitely a time sink!
Wednesday, March 21, 2007
Arjuna and the history of middleware
Wolfgang has written a great paper on the impact of research on middleware. Arjuna gets several references, although there are a few inaccuracies (e.g., Arjuna Technologies wasn't sold to JBoss). Well worth a read.
Tuesday, March 20, 2007
John Backus dies
Monday, March 05, 2007
Red Hat and Exadel
This is interesting news in a number of ways. I think it's great news for our products and projects.
Tuesday, February 27, 2007
Nooooooooooo!!!!!
This just got pointed out to me through the minutes of the first day at the W3C workshop on Web of Services. I must have missed this sterling move on the part of the TAG. I've written before about sessions and Web Services and how WS-Addressing is not the right way of doing this, but WS-Context is better. Of course it's no surprise to see that contributions to this TAG effort are from heavy WS-Addressing users, but us WS-Context supporters need to be more proactive.
Thursday, February 22, 2007
Vinoski leaves IONA
I meant to post this last week but work got in the way. I first met Steve back in the early 1990's at a Usenix conference where we were both presenting. He'd recently joined IONA from HP and I knew of him because of his great work on ORB Plus, still one of the best implementations. We had several Guinness drinking sessions and he could easily drink most people under the table! We kept in touch over the years though I don't think I've drunk as much since! I wish him the best in his new job.
Thursday, February 15, 2007
Red Hat and interoperability
I'm not sure if this got much publicity, but it's worth checking out. Interoperability with different vendors is very important. Web Services are a way of achieving some interoperability, but they're not the entire solution.
Monday, February 12, 2007
Thursday, February 01, 2007
Update on Jim
I hope this is wrong and he turns up! It's good to see the community coming together to help the search.
Wednesday, January 31, 2007
Tuesday, January 09, 2007
Wednesday, January 03, 2007
Submit a paper
I forgot to mention, but Eric asked me to be on the program committee for the W3C workshop on Web Services for Enterprise Computing. The deadline is fast approaching, so if you've got a paper or a position on the topic, please submit it.
Tuesday, January 02, 2007
WS-Context progressing to standard
This dropped through the cracks, but I'm pleased to say that WS-Context has passed the TC vote to proceed towards a standard.