Well, the CFP for JavaOne 2005 is out. A quick perusal of what they want this year shows that maybe they've started to listen to the attendees. I've been going to JavaOne for quite a few years and have presented at a couple. However, over the years I've noticed the attendance level drop, and my own opinion is that there are a few reasons for this:
(i) at the start, it was almost impossible to get a presentation accepted unless you were a Sun employee, or at least that's how it seemed. I remember looking through the agendas for early JavaOnes and playing "spot the non-Sun employee" - it was hard.
(ii) following on from (i), I think it became hard for people to get information that was truly for the community and from the community. OK, Sun brought us Java, but as a percentage of the people who actually use and develop the technology, are they really that influential? So the early JavaOnes were hype-shows, with not a lot of content. I remember one friend commenting to me after one early show, "This conference intentionally left blank", and he was pretty much on the mark. I suspect this kind of comment wasn't unique and had knock-on effects.
(iii) JavaOne became a "show" and not a conference/workshop. Even the BOFs started to suffer. Unfortunately the marketing still pushed "conference". Now when I attend a conference I have certain expectations for the types of presentation and their technical detail. Definitely a different expectation for a "show".
So it looks like Sun have finally started to listen to the people who attend these conferences and turn the tide away from a "show". Let's hope it works because JavaOne should be the place to go if you're into Java.
Thursday, December 23, 2004
Semantic Web
So, I've been doing some of my paper reviews for WWW 2005. Some interesting stuff. More than 50% of the papers I've reviewed so far have been on the Semantic Web. Now I haven't had too much to do with that technically, but I've seen enough discussions elsewhere to know that some people see this as a fundamental building block for Web Services, whereas others see it more as an application that lives on Web Services. I have to say I fall into the latter camp, but if there are some good pointers to the former, let me know.
Saturday, December 18, 2004
Down time
So Christmas is getting near and apparently the goose is getting fat (but I ain't putting a penny in the old man's hat!) My last official day at work this year was Friday 17th, but in reality I'll be working over the "vacation", as anyone who knows me can attest. I see this more as "working from home" than "holiday", but them's the realities.
I've made a promise to my family to do no work on Christmas Day and New Year's Day, which since I've had kids isn't hard to accomplish - lots to keep me busy: lending moral support to the turkey as it gets cooked, eating the turkey afterwards and playing with the kids. Very different to the mid-90s, when I remember writing the first version of the Arjuna Transaction Service over Christmas Eve/Christmas Day (did a lot of its testing while watching The Lady in White, strangely enough - excellent film). Back then it was called JavaArjuna and was in use in the University before there even was a JTS or JTA specification from Sun. Hence the tag-line we still use today: that it was the world's first 100% pure Java transaction service implementation. It looks a lot different today, thanks to the efforts of everyone over the years here and all of our other incarnations.
Anyway, this break is already looking busy, what with WWW 2005 papers to review (4 down and another 14 to go), the usual plethora of work commitments, updates to WS-CF with Eric and Greg and the odd teleconference. Oh, and a blog entry now and then. Believe it or not, I'm kind of looking forward to the festive season.
What next in the weird and wacky world of Web Services?
So we've got SOAP, UDDI, WS-Addressing (almost), transactions (one flavour or another), coordination (two), workflow (BPEL and CDL), WS-Context, security, and a host of other specifications (not many standards yet). Question is: what's next? Life-cycle is an obvious one, though there are more ways of doing it wrong for SOA/Web Services than of doing it right, so I'll hold my breath for now. Persistence? Surely a back-end implementation choice. Trading? Maybe - no, it's not the same as UDDI. QoS - Policy should cover that.
Maybe we're reaching a plateau and the plethora of specifications will be subject to Darwinian evolution. One can only hope.
If you've any ideas about the sorts of things you'd like to see (or really hate to see), drop me a comment.
Friday, December 17, 2004
Implementations and specifications
In his blog, Dave argues that implementation details that creep into specifications/standards aren't a bad thing. He's right in that implementation experience needs to play an important part in the development of specifications and standards. This isn't anything new of course. You only have to look at the OMG, the JCP, the Open Group and a host of other groups/efforts over many years to see where this is true.
However, where Dave is wrong is in his implicit subtext: that the WS-Addressing specification should be taken as-is by the working group because it is based on implementations. He misses the point entirely. Just look back at the small set of examples above where I said implementations+specifications work well. These are all based on collaborative input from a wide range of vendors (and academics). What Dave seems to assume is that because IBM, BEA and MSFT have implemented WS-Addressing and then submitted it for standardization, that process should assume that those guys know best.
When we submitted the original WS-CAF work to OASIS to form the OASIS WS-CAF Technical Committee, I suppose we could have said "rubber stamp this effort". But that's hardly an open, all-inclusive process, now is it? Our original work was simply a starting point and we were aiming to get more people and experiences involved. The results have been some pretty radical changes from what we originally submitted, but so be it: that's the way the community as a whole has decided to move.
Now Dave keeps banging on about how addressing needs to be out there quickly, so we need to fast-track the work. Fair enough. But that doesn't mean that objections to aspects of the specification from companies and individuals who weren't involved in it originally should be ignored, or classified as unimportant. All the ones I've seen so far have been based on implementation experience too!
Thursday, December 16, 2004
I'm an exe
I just took the test here and it turns out my file extension is exe. I'm not sure if that's good or bad. May take the test again later.
When is a door not a door? When it's a jar!
So when is an optional feature not an optional feature? When it's required.
OK, that's not as funny as the title (was that funny?), but it's not meant to be. We're doing some work with some other vendors in the Web Services space (names withheld to protect the innocent) and this requires us to use a couple of Web Services specifications that have various optional fields defined in them. We have implementations and they have implementations and we need to talk to one another. Now, my reading of optional is:
(i) it doesn't have to be there;
(ii) if it is present, then you can use it (maybe there are preference rules associated with when you can or should use it);
(iii) if it isn't there, then you should be able to deal with that.
Certainly the specifications I'm referring to here make it clear what optionality means and it's covered in those 3 rules.
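As a rough illustration (the message and field names here are entirely made up, not taken from any of the specifications in question), this is how a receiver honouring those three rules might look in Java:

```java
// Hypothetical message type; the field names are illustrative only.
class IncomingMessage {
    String requiredId;    // must always be present
    String optionalHint;  // optional: may legitimately be null
}

class Receiver {
    void process(IncomingMessage msg) {
        // Rule (i): absence of the optional field is not an error.
        // Rule (iii): be prepared to cope when it isn't there.
        String hint = (msg.optionalHint != null)
                ? msg.optionalHint      // Rule (ii): if present, we're free to use it
                : "default-behaviour";  // otherwise fall back to a sensible default

        System.out.println("Processing " + msg.requiredId + " with hint: " + hint);
        // What the receiver must NOT do is reject the message purely because
        // the optional field is present, or purely because it is absent.
    }
}
```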
Unfortunately the implementation of the foreign system we're working with takes a different approach. In some cases:
(a) if the optional field is present, then it assumes the message is invalid;
(b) if the optional field isn't present, then it assumes the message is invalid.
Oh, and neither (a) nor (b) are publicized anywhere. Very annoying. Now I'm not entirely against an implementation requirement for something that is optional. But if such a thing exists, it should be mentioned somewhere.
Now here's the punchline: we're doing this for interoperability! Laugh? I nearly cried!
Wednesday, December 15, 2004
Eric Newcomer on WS-CAF and Web Services Transactions
Eric gives a really good overview of some of the work we've been doing over the past few years (more than I care to remember) on Web Services transactions. I'd encourage you to check it out.
I wrote an article with colleagues from IBM a while back which addressed some of Roger's themes.
Monday, December 13, 2004
Press release for WS-Context
I've mentioned a few times that we took part in an interoperability demonstrator at XML 2004 for WS-Context. Well, we've just issued a press release with our friends at IONA Technologies and Oracle.
Web Services transactions, entropy, heuristics and the information society
Imagine you walk into a bank and want to perform a transaction (banks are very useful things in transaction examples). That transaction involves you transferring money from one account (savings) to another (current). You obviously want this to happen with some kind of guarantee, so for the sake of this example let's assume we use an ACID transaction.
Now there's no such thing as a guarantee where physical media are concerned. The second law of thermodynamics states that the entropy of a closed system always increases, and entropy is related to the level of chaos/disorder in the universe. Put simply, a less entropic system is more ordered and a more entropic system is more chaotic. I won't go into the definitions of "order" and "chaos" here, but another way of looking at this is to consider what happens when you buy an apple (the fruit, not the hardware!): it's fairly "ordered" in that the molecules that make it up are pretty much all "apple". However, if you leave it in the fruit bowl for too long it goes wrinkly and fuzzy with mould and eventually starts to decay entirely. (Kind of reminds me of some of the "experiments" we used to do in my undergraduate days to see how long unwashed plates would take to mould over - though looking back I think they were really excuses for not washing up and nothing to do with physics experiments!)
Anyway, back to the apple. Over time, the molecules break down from the action of light, natural chemical reactions etc. The molecules form a host of other molecules and become less ordered, i.e., more entropy enters the system.
This is a very long-winded way of saying that everything decays eventually. The same thing that happens to the apple happens to physical media. And statistics/probabilities say that even a new hard disk can fail on its first use.
So, in our bank example, despite the fact that we're using transactions and assuming that the transaction system is reliable, certain failures will always occur, given enough time. The kinds of failure we're interested in for this example are those that occur after the participants in the two-phase commit transaction have said they will do the work requested of them (transfer the money), i.e., during the second (commit) phase. So, the money has been moved out of the savings account (it's really gone) and is being added to the current account, when the disk hosting the current account dies. Usually what this means is that we have a non-atomic outcome, or a heuristic outcome: the transaction coordinator has said commit, one participant (savings account) has said DONE, but the second one (current account) has said OOPS. There's no going back with the work the savings participant has done, so this transaction isn't going to be atomic (all or nothing).
Most enterprise transaction specifications and implementations allow for this via a heuristic error. This basically means that the transaction system can be informed (and hence can inform) that such an error has happened. There's not a lot that can be done automatically to fix these types of error. They often require semantic information about the application in order to restore consistency, so have to be handled by a system administrator. However, the important thing is that someone knows there's been a problem.
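In the J2EE world this shows up directly in the JTA API: UserTransaction.commit() can throw heuristic exceptions, and the minimum an application should do is make sure somebody is told. A rough sketch (the account interface and the alerting hook are hypothetical, and error handling is abbreviated):

```java
import javax.transaction.HeuristicMixedException;
import javax.transaction.HeuristicRollbackException;
import javax.transaction.RollbackException;
import javax.transaction.UserTransaction;

public class TransferService {

    public void transfer(UserTransaction tx, Account savings, Account current, long amount)
            throws Exception {
        tx.begin();
        try {
            savings.debit(amount);
            current.credit(amount);
            tx.commit();
        } catch (HeuristicMixedException e) {
            // Some participants committed, some rolled back: the classic
            // non-atomic outcome described above. This cannot be fixed
            // automatically, so escalate to a human.
            alertAdministrator("Heuristic mixed outcome on transfer", e);
            throw e;
        } catch (HeuristicRollbackException e) {
            // All participants rolled back after the coordinator said commit.
            alertAdministrator("Heuristic rollback on transfer", e);
            throw e;
        } catch (RollbackException e) {
            // Clean rollback: no money moved, nothing heuristic about it.
            throw e;
        } catch (RuntimeException e) {
            // Something went wrong before commit was attempted: roll back cleanly.
            tx.rollback();
            throw e;
        }
    }

    private void alertAdministrator(String msg, Exception cause) {
        // Hypothetical hook: page/email the operations team, write an audit record, etc.
        System.err.println(msg + ": " + cause);
    }
}

// Hypothetical account abstraction used above.
interface Account {
    void debit(long amount);
    void credit(long amount);
}
```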
Imagine that this error happens and you don't know about it! Or at least don't know about it until the next time you check your account. Not good. Personally I'd like to know if there's been a screw-up as soon as possible. In our bank scenario, I can go and talk to someone in the branch. If I was doing this via the internet there's usually a number I can call to talk to someone (probably located in a different country these days ;-)
Now why is this important? Well, there are a few Web Services transactions specifications around that can be used in this scenario: BTP, WS-Atomic Transaction and WS-ACID Transaction. The first and last both allow for heuristic-like errors to be sent from participant to coordinator and from coordinator to end-user, whereas the second one (from IBM, Microsoft and BEA) doesn't. This seems like a strange omission, because errors do happen.
OK, it's not as bad as it might first seem. Of course I can use WS-Atomic Transaction to communicate these errors. Unfortunately I just can't do it within the specification. I'd have to overload SOAP faults (for example), or maybe use some proprietary extension (repeat after me: vendor lock-in is not good). Not exactly good for interoperability and/or portability. The fact that protocols like WS-Atomic Transaction and WS-ACID Transaction are really meant for interoperability of existing transaction service implementations (e.g., Tuxedo-to-CICS, or ATS-to-Encina), where heuristics originated, makes this omission even more striking.
Oh well. Maybe failures don't happen. The 2nd law of thermodynamics does fall down if time flows backwards ;-)
Monday, December 06, 2004
XML 2004 paper and presentation available
Well the people at IDEAlliance who hosted XML 2004 have put the papers and presentations online. (No idea why they are hosted at different places!)
The paper that Doug Bunting and I wrote on WS-CAF is available here and the presentation is here. As we said on the day, if you only walk away from the paper/presentation with one thing, then I'd hope it is that there are some pretty important holes in the current Web Services architecture. WS-CAF presents one possible solution to some of these holes, but as with most things in computer science, there are other possible solutions. But it's the problem space that is important: without generally agreed upon solutions, Web Services are going to continue to be fragmented and disjointed.
Anyway, overall it was a pretty good conference, so I'd encourage you to check out more than our paper.
Update to WS-Context interoperability live endpoint
Here I gave the URL for our WS-Context interoperability endpoint. I'm working with one of my colleagues from Oracle on the WS-CAF TC to get the interoperability documents made public. In the meanwhile, our Retailer is available at http://services.arjuna.com:8080/jboss-net/services/Retailer and the configuration for the shopping cart is at http://services.arjuna.com:8080/wscafdemo/config.jsp.
Wednesday, December 01, 2004
Transaction interoperability: myth or reality?
I've been reading some interesting posts on a few mailing lists about transaction interoperability and whether a) it's possible, b) desirable, and c) how to achieve it. This is an issue that's pretty close to my heart because I've been working with various people over the years, such as Eric, Ian and Tom, to accomplish it in one environment or another. I hope what follows is an objective discussion.
The short answer to a) is "yes", it's possible. The longer answer is "it often takes a lot of effort to do it, and sometimes you have to go through more hoops than you really should". Historically, transaction interoperability has been a kind of Holy Grail, because the likes of CICS, Tuxedo and DEC ACMS were backed by companies who had the muscle to push the homogeneous software pattern: one implementation throughout the organisation. The obvious result of this was vendor lock-in, but the knock-on effect was that if you really, really, really needed interoperability then you'd pay one or more of these companies to tailor a solution for you. A win-win scenario for them, but not particularly attractive to the customer.
As a result, there have been several efforts to achieve interoperability over the years, the most notable probably being the CORBA Object Transaction Service (OTS). Unfortunately, the early versions of the OTS (prior to 1.2) suffered from the general non-interoperability of CORBA implementations (don't get me started on the BOA!), which meant that even if you had the same OTS implementation running at both sides of a conversation, unless it was running on the same CORBA ORB, interoperability was once again tricky to achieve. Fortunately the likes of Steve, Michi and others helped to massage CORBA into an interoperable distributed system (almost a decade after the OMG was first established), and OTS 1.2 and beyond became more and more interoperable. Unfortunately, for one reason or another (the BOA being one of them), OTS take-up was slower than originally imagined and even today interoperability at this level isn't great.
However, that's where Web Services can help. I've said this before, but I think it's important enough to say again: Web Services are as much about interoperability as they are about Internet scale computing. We're seeing a lot of pull for them in the interoperability space. Fortunately the Web Services transactions specifications from OASIS and IBSoft provide an Atomic Transaction model that is designed specifically for interoperability (I'm biased here, but I'd say that the one in WS-TXM is better for this). So, interoperability at this level has become a reality and we're starting to see implementations of different underlying transaction services talking to each other!
OK, so some of the above is actually an answer to c). But back to b), and I think this is a no-brainer in today's world of cost-cutting and company mergers; the notion of a single implementation of anything, be it database, application server or transaction service, is simply no longer true (if it ever truly was). Companies that grow through acquisitions typically aren't able to issue edicts about scrapping existing infrastructural investments in favour of the current software vendor of the day. So, interoperability within the organisation is happening as a fact of life today. Interoperability across organisations (even at the level of traditional ACID transactions) will happen, but it'll be less prevalent (simply because the ACID transaction model doesn't really work in that world).
That's not to say that all transaction services need to support interoperability. There are bound to be environments where interoperability is not needed by default and where extra work may be necessary when and if it ever is needed. But I think these are niche cases and should be carefully examined before going ahead.
Wednesday, November 24, 2004
WS-Context interoperability endpoint
We've decided to put up a public endpoint for the WS-Context interoperability demonstrator at http://services.arjuna.com:8080/wscafdemo. In order to use it you'll need to check out the TC use case document, which for some reason isn't publicly available off the WS-CAF TC homepage. I'll post an update later if it turns out we (the TC) can make this available. Thanks go to my colleagues Malik Saheb and Kevin Conner for making this possible.
The Activity Service
Recently I wrote an article with Bruce Martin about the J2EE Activity Service. In essence, what we show is that ACID transactions aren't sufficient for everything and there are a number of extended transaction models that allow for the controlled relaxation of the ACID properties. What this means is that one size doesn't fit all and as usual it is necessary to use the right tool for the right job. Anyone who's worked in the area of (distributed) transactions for a while will typically have a story or two to tell about where ACID transactions were shoe-horned into areas they simply weren't suitable for.
Therefore, rather than provide support for a single model, we worked with IBM, IONA Technologies, Bank of America, Alcatel and others in the OMG on the Additional Structuring Mechanisms for the OTS. This defines an infrastructure to support a wide range of extended transaction models. The architecture is based on the insight that the various extended transaction models can be supported by providing a general-purpose event signaling mechanism that can be programmed to enable activities (application-specific units of computation) to coordinate each other in a manner prescribed by the extended transaction model under consideration.
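To give a flavour of that insight, here's a deliberately simplified sketch - the interface names are made up and this is not the JSR 95 or OMG API - showing how a generic coordinator plus pluggable signals and actions can encode different transaction models:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: these interfaces mimic the *shape* of an activity
// coordination framework, not the actual JSR 95 / OMG Activity Service API.

interface Signal {
    String name();   // e.g. "prepare", "confirm", "compensate"
}

interface Action {
    // Each registered participant interprets the signal according to the
    // extended transaction model in force and returns an outcome.
    String processSignal(Signal signal);
}

class ActivityCoordinator {
    private final List<Action> actions = new ArrayList<>();

    void register(Action action) {
        actions.add(action);
    }

    // The ordered list of signals is what encodes a particular extended
    // transaction model; the framework itself is model-agnostic.
    void complete(List<Signal> signalSet) {
        for (Signal s : signalSet) {
            for (Action a : actions) {
                String outcome = a.processSignal(s);
                System.out.println(s.name() + " -> " + outcome);
                // A real coordinator would feed outcomes back into the
                // signal set to decide which signal to send next.
            }
        }
    }
}
```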
Once this was adopted within the OMG, IBM, ourselves and others moved to push this into J2EE via JSR 95: the J2EE Activity Service. This has recently been finalised and presumably will become part of some future version of J2EE.
Now whether or not this work will actually change the way people develop transactional applications in J2EE is open to debate. I know that when we were doing the work it seemed "obvious" that this was the future of transactions and alongside IBM we've had some interesting development experiences. But the world has moved on since 1997 and I often have interesting discussions with Greg and others as to whether the Activity Service is of much use. Only time will tell.
However, what cannot be in doubt is the influence this work has had elsewhere. Since the original OMG work was released, it has influenced the WS-Coordination specification from IBM, Microsoft and BEA, as well as the WS-Coordination Framework from ourselves, IONA, Oracle, Sun and Fujitsu. It also formed the basis of our initial submission to the OASIS BTP technical committee back in 2001, though its influence after that point was minimal.
In some ways the new OASIS WS-CAF architecture, which factored context out of the original Activity Service, is a purer form of the model we were trying to achieve back then.
Saturday, November 20, 2004
Better late than never
WS-Context has gone to committee draft in OASIS WS-CAF, which is somewhat of a relief, as it means we can move on to the WS-Coordination Framework specification.
As I've said in an earlier post, WS-Context took me by surprise with its applicability in a wide area, despite the fact that it is probably conceptually the simplest of the 3 specifications we're looking at in the technical committee. IMO one of the most important areas where WS-Context can assist is in maintaining a scalable Web Services architecture for stateful Web Services. I agree with Jim Gray that truly stateless Web Services aren't of any real use, and hope to have a paper I've written on the subject published soon. State is a necessity; it's just what constitutes state that may be open for debate.
In this post I'll try to give a flavour of how I think WS-Context fits into this debate. The area I'll consider is that of stateful sessions, where a user wants to return to a specific state time and again over the course of hours, days, weeks etc. Now in Web Services there are essentially two proposed ways of doing this:
(i) WS-Addressing, which uses ReferenceProperties to explicitly bind a Web Services address to a specific session (and hence state).
(ii) WS-Context, which provides a more lightweight, generalized session model. One analogy you could use is that of a Web Services cookie that isn't tied to a single Web server - I'm glad we didn't call the specification WS-Cookie though.
Both models support stateful interactions over a period of time. The problem is that (i) encourages tightly coupled system design: the address is only good for a specific session, and it is tied to a single endpoint. If you've only got to manage a small number of these addresses then that's obviously workable, but if the session spans hours or days then you're going to want to make them durable somehow. That's not necessarily a hardship, unless you have many of them.
However, if a client application interacts with multiple services within the same logical session, then it is often the case that the state of a service has relevance to the client only when used in conjunction with the associated states of the other services. This necessarily means that the client must remember each service reference and somehow associate them with a specific interaction; multiple interactions will obviously result in different reference sets that may be combined to represent each session.
For example, if there are N services used within the same application session, each maintaining m different states, the client application will have to maintain N*m reference endpoints. This obviously does not scale. However, an alternative approach is to use (ii). Each interaction with a set of services can be modeled as a session and this in turn can be modeled as a WS-Context activity with an associated context (at a minimum essentially just a URI). Whenever a client application interacts with a set of services within the same session, the same context (same URI) is propagated to the services and they map this context to the necessary states that the client interaction requires. How this mapping occurs is an implementation specific choice that doesn't need to be exposed to the client. Furthermore, since each service within a specific session gets the same context, upon later revisiting these services and providing the same context again, the client application can be sure to return to a consistent set of states. You only need to remember the context, no matter how many services your application uses in that activity. Thus, this model scales much better.
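A toy sketch of the contrast, with hypothetical client and service types (in practice the context would travel as a WS-Context SOAP header rather than a method parameter):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical service proxy: contextId stands in for the WS-Context
// context (at minimum just a URI) carried with each request.
interface SessionAwareService {
    String invoke(String contextId, String request);
}

// Approach (i): the client keeps one session-specific reference per
// service per state - N services * m states pieces of bookkeeping.
class ReferenceHoardingClient {
    private final Map<String, String> endpointReferences = new HashMap<>();

    void remember(String serviceAndState, String endpointReference) {
        endpointReferences.put(serviceAndState, endpointReference);
    }
}

// Approach (ii): the client keeps a single context URI and hands the
// *same* context to every service in the session; each service maps it
// to whatever state it needs behind the endpoint.
class ContextPropagatingClient {
    private final String contextUri; // the only thing the client must remember

    ContextPropagatingClient(String contextUri) {
        this.contextUri = contextUri;
    }

    String call(SessionAwareService service, String request) {
        return service.invoke(contextUri, request);
    }
}
```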
The Web Services architecture is not prescriptive about what happens behind service endpoints. This gives flexibility of implementation, allowing systems to adapt to changes in requirements, technology etc. without directly affecting users. It also means that issues such as whether or not a service maintains state on behalf of users or their (temporally bounded) interactions, has been an implementation choice not typically exposed to users. The WS-Context session model encourages loose coupling of services and their users and keeps any implementation specific choices about state where they belong: behind the service endpoint.
Thursday, November 18, 2004
Java on the slide?
I came across this a few years ago and it's interesting to monitor periodically. I see that Java has taken a dive of over 8% recently and has now been overtaken by C. Assuming their measurement metrics are right, then if you check out the graph there's been a downward trend for Java since 2001. I don't think this is surprising or necessarily the death knell for the language. There was a lot of hype for Java 4+ years back and that will have encouraged people to try it in places where it simply isn't suitable. Now I suspect it's just starting to find its right level.
I love Java and have used it since it was called Oak: my first program in it was a game where you could shoot off bits of Duke in a fairly gory manner. However, my favourite programming language still has to be C++.
Web Services context, coordination and transactions
Well Doug Bunting and I gave our presentation on the state of the Web Services architecture and how OASIS WS-CAF could be used to fill in some of the gaps. I thought it went OK and we had a few good questions from the audience and more offline. Unfortunately the proceedings from the conference aren't publicly available, so I'll have to wait until I get back to the office before I put a copy on our Web site.
It's amazing how these things pan out, but WS-Context was always the specification I assumed would interest people the least. However, and with the benefit of 20-20 hindsight, because it probably has the most relevance, it is the one that everyone latches onto most often. It's always nice to have people come up and give you new use cases for how context (and the notion of activity) fit nicely into their architecture. It kind of reinforces that the work we're doing isn't just useful in a narrow field.
Fairly obviously though, I think that the 3 specifications that form WS-CAF (WS-Coordination Framework and WS-Transaction Management being the other two) create a "pyramid of interest", tapering to transactions at the top. Put another way, the further up the stack we go, the less likely people are to find them generally useful. Don't get me wrong though, that's not necessarily a bad thing.
Arjuna Technologies blog
At the moment Arjuna Technologies doesn't have support for blogs, but hopefully that'll change shortly. Watch this space.
I'd encourage anyone interested in reliable middleware for J2EE or Web Services to check out Arjuna. We've got some pretty cool products and a collection of bright people behind them. Despite the fact that Hewlett-Packard decided to drop out of the middleware business, these guys stuck together and did a great job of pushing the envelope.
WWW 2005 update
Dan Suciu (University of Washington) and I just made our first pass over the papers for our Web Services track. Looks like everyone will have about 10 papers :-(
The rough breakdown on subject area seems to be:
Practice and experience: 10%
Formal methods: 5%
Query and searching: 20%
XPath: 5%
XML: 25%
Fault tolerance: 5%
Workflow: 10%
Principles of Web Services: 15%
Miscellaneous: 5%
I need to think a bit about this to see if there are any interesting conclusions to come out of the distribution (hey, it's 8am here in DC and I've been up since 3am).
Gartner mentions WS-CAF
I was speaking to Eric last night. He's been attending the Gartner conference/summit on Web Services in Florida. (Probably better weather than we're having here in DC.)
He told me that Gartner said that there were only 3 technical committees in OASIS at the moment that are doing anything worth watching, and one of those is WS-CAF.
I wonder what the other two are ;-) ?
Wednesday, November 17, 2004
Searching for White Dwarfs
Savas, Jim and others have been doing some nice work on combining Grid and Web Services, called WS-GAF. Check it out if you want to see how it is possible to implement Grids using today's Web Services and still maintain the SOA principles.
One of the applications that they are using to exemplify their techniques is based on searching for White Dwarf stars. I spent over a third of my university undergraduate degree doing astrophysics, so I find this really interesting. Hey, maybe this application could be developed into the next generation Seti@home.
More interesting reads
If you're looking for interesting things to read, then I'd recommend checking out the blogs of Jim and Savas. They're friends and ex-work colleagues with whom I may not always agree, but with whom I've always had interesting discussions.
I don't think I've ever had a conversation with them about work that I came away from thinking it was a waste of time. Other topics, such as Savas' tendency to believe that the Greek nation invented everything first, are another matter ;-)
We've written quite a few articles and papers together over the years and hopefully that'll continue.
A Washington Monument
When I first came to DC many years ago for a conference in Annapolis, I came with my friend Stuart and we spent a few days looking around. It was great and we saw all of the usual monuments and sights. At the time one of Stuart's friends who lived here gave us a tour (tip: get a local to show you round, you see more than in the guide books) and took us to see a monument/sculpture/work-of-art that I'd like to find again, but just don't know where to look. It was a giant hand coming up from the ground, near the ocean. Very cool looking. Must take a look through some of the local guide books.
WWW 2005
I'm a co-chair of the Web Services track for WWW2005, which is going to be in Japan. I doubt I'll be able to attend the actual conference, which is a shame since I've heard interesting things about Japan from friends like Jim and Lindsay.
Regardless, it's been nice to be co-chair, until the submitted papers came flooding in. Looks like all of us on the Web Services track will be looking at about a dozen different papers to review over Christmas. Oh well, I suppose it beats watching the Queen's speech!
That was easy!
As anyone who knows me can attest, I've been saying for years that I won't blog, because I didn't see the point. So I feel a bit strange doing this. To be honest, I felt forced into it ;-) Over the past few years I've watched friends like Greg Pavlik, Eric Newcomer, Steve Vinoski and Mark Potts all start blogging (and that is by no means a complete list), so there must be something in it! And when I found myself reading their blogs as well as others and finding them interesting, something clicked.
So here we are. I'm not saying that what I'll be writing will be interesting or that you'll want to read it, but at least it'll be fun - I hope.
WS-Context and XML 2004
I'm here at XML 2004 in Washington DC for a couple of reasons. The first is that we're doing an interoperability demonstration with Oracle and IONA Technologies of our respective WS-Context implementations, just to prove that it is an interoperable spec. and (hopefully) soon-to-be standard.
This is cool, because I've been working on this notion of context for the best part of a decade, starting with some nice work I did with my friend Stuart Wheater and friends/colleagues at IBM Hursley (Ian Robinson, Tom Freund, Tony Storey and Iain Houston to name a few) and IONA Technologies (Eric Newcomer) on the OMG Activity Service (aka the Additional Structuring Mechanisms for the OTS specification - bit of a mouthful!) and moving through J2EE to Web Services. So it's nice to see the fruits of your work, so to speak. However, it's morphed quite a bit over the years, particularly over the past couple. And it's nice to see wider adoption (and constructive input) by companies such as Oracle.
It's also nice to see more and more specifications coming out that use context: WS-Enumeration, WS-Security and WS-Reliability, to name a few. Now if only we could encourage these guys to use a standard context format rather than developing ad hoc solutions.
Although Web Services are as much about interoperability as they are about the Web, it's interesting to note that interoperability isn't a given for Web Services specifications. It's like anything: unless you design with it in mind from the outset, you can quickly end up with a closed system. For example, without naming names, there are several WS specifications out there that define extensibility elements (##other to you and me). Now that isn't necessarily a bad thing (hey, we do the same in WS-Context). It becomes a bad thing when either the specification requires you to use them (and doesn't provide concrete rules for the contents) or implementations require them to be filled with vendor-specific information (and again, often don't externally publish this information for other vendors to use). IMO this is an easy route back to vendor lock-in.
We had the same issue back in the OMG OTS days, where the transaction context defines an extensibility element (a CORBA any). The intention was that an implementation could fill this with whatever information it needed to provide optimizations for itself (e.g., build up a subordinate transaction hierarchy in a downstream node when a request is made on a service on that node and then pass it back up to the parent when the response is delivered, rather than call back to the parent as you build up the hierarchy before executing the request), but if that information wasn't present then heterogeneous implementations could still interact, albeit in a possibly less efficient manner. However, it quickly got hijacked and implementations required the CORBA::any to be present and filled with specific information in order for them to work. Quite nice if you want to prevent interoperability and encourage lock-in.
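As a rough sketch of the "ignore what you don't understand" behaviour that keeps such extensibility points interoperable (the namespace URI below is made up), a receiver can simply skip extension elements it doesn't recognise instead of faulting:

```java
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

class ExtensionTolerantReader {

    // Namespaces this implementation actually understands (illustrative URI).
    private static final String KNOWN_NS = "http://example.org/known-spec";

    void read(Element message) {
        NodeList children = message.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node child = children.item(i);
            if (child.getNodeType() != Node.ELEMENT_NODE) {
                continue;
            }
            if (KNOWN_NS.equals(child.getNamespaceURI())) {
                process((Element) child);
            } else {
                // Unknown extension element (##other): ignore it rather than
                // rejecting the whole message. Requiring vendor-specific
                // content here is what leads straight back to lock-in.
            }
        }
    }

    private void process(Element known) {
        System.out.println("Handling " + known.getLocalName());
    }
}
```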
Anyway, I digress. The WS-Context interoperability demo hosted by OASIS went well. We used two local implementations (Arjuna's running on my laptop and IONA's running on their own laptop) and a remote one (Oracle). It'd be nice to say that that's the configuration we always intended, because it shows us talking from DC to Philadelphia, where the Oracle endpoint is located, but I'd be lying. The reason we did this was that poor Simeon (Green) from Oracle had his laptop die on him an hour before the demonstration! Sod's law!!
Tomorrow Doug Bunting (Sun) and I are presenting on WS-CAF and how it fills some of the gaps in the current Web Services architecture. Hopefully that will go as well as the interop. demo.