Well, the CFP for JavaOne 2005 is out. A quick perusal of what they want this year suggests that maybe they've started to listen to the attendees. I've been going to JavaOne for quite a few years and have presented at a couple of them. However, over the years I've noticed attendance drop, and in my opinion there are a few reasons for this:
(i) at the start, it was almost impossible to get a presentation accepted unless you were a Sun employee, or at least that's how it seemed. I remember looking through the agendas for the early JavaOnes and playing "spot the non-Sun employee" - it was hard.
(ii) following on from (i), I think it became hard for people to get information that was truly for the community and from the community. OK, Sun brought us Java, but as a percentage of the people who actually use and develop the technology, are they really that influential? So the early JavaOnes were hype-shows without a lot of content. I remember one friend commenting to me after an early show, "This conference intentionally left blank", and he was pretty much on the mark. I suspect that kind of comment wasn't unique and had knock-on effects.
(iii) JavaOne became a "show" and not a conference/workshop. Even the BOFs started to suffer. Unfortunately the marketing still pushed "conference". Now when I attend a conference I have certain expectations for the types of presentation and their technical detail. Definitely a different expectation for a "show".
So it looks like Sun have finally started to listen to the people who attend these conferences and turn the tide away from a "show". Let's hope it works because JavaOne should be the place to go if you're into Java.
Thursday, December 23, 2004
Semantic Web
So, I've been doing some of my paper reviews for WWW 2005. Some interesting stuff. More than 50% of the papers I've reviewed so far have been on the Semantic Web. Now I haven't had too much to do with that technically, but I've seen enough discussions elsewhere to know that some people see this as a fundamental building block for Web Services, whereas others see it more as an application that lives on Web Services. I have to say I fall into the latter camp, but if there are some good pointers to the former, let me know.
Saturday, December 18, 2004
Down time
So Christmas is getting near and apparently the goose is getting fat (but I ain't putting a penny in the old man's hat!) My last official day at work this year was Friday 17th, but in reality I'll be working over the "vacation", as anyone who knows me can attest. I see this more as "working from home" than "holiday", but them's the realities.
I've made a promise to my family to do no work on Christmas Day and New Year's Day, which since I've had kids isn't hard to accomplish - lots to keep me busy: lending moral support to the turkey as it gets cooked, eating the turkey afterwards and playing with the kids. Very different to the mid-90s, when I remember writing the first version of the Arjuna Transaction Service over Christmas Eve/Christmas Day (did a lot of its testing while watching The Lady in White, strangely enough - excellent film). Back then it was called JavaArjuna and was in use in the University before there was even a JTS or JTA specification from Sun. Hence the tag-line we still use today: that it was the world's first 100% pure Java transaction service implementation. It looks a lot different today, thanks to the efforts of everyone over the years here and in all of our other incarnations.
Anyway, this break is already looking busy, what with WWW 2005 papers to review (4 down and another 14 to go), the usual plethora of work commitments, updates to WS-CF with Eric and Greg and the odd teleconference. Oh, and a blog entry now and then. Believe it or not, I'm kind of looking forward to the festive season.
What next in the weird and wacky world of Web Services?
So we've got SOAP, UDDI, WS-Addressing (almost), transactions (one flavour or another), coordination (two), workflow (BPEL and CDL), WS-Context, security, and a host of other specifications (not many standards yet). Question is: what's next? Life-cycle is an obvious one, though there are more ways of doing it wrong for SOA/Web Services than of doing it right, so I'll hold my breath for now. Persistence? Surely a back-end implementation choice. Trading? Maybe - no, it's not the same as UDDI. QoS - Policy should cover that.
Maybe we're reaching a plateau and the plethora of specifications will be subject to Darwinian evolution. One can only hope.
If you've any ideas about the sorts of things you like to see (or really hate to see), drop me a comment.
Friday, December 17, 2004
Implementations and specifications
In his blog, Dave argues that implementation details that creep into specifications/standards aren't a bad thing. He's right in that implementation experience needs to play an important part in the development of specifications and standards. This isn't anything new of course. You only have to look at the OMG, the JCP, the Open Group and a host of other groups/efforts over many years to see where this is true.
However, where Dave is wrong is in his implicit subtext: that the WS-Addressing specification should be taken as-is by the working group because it is based on implementations. He misses the point entirely. Just look back at the small set of examples where I said implementations plus specifications work well. These are all based on collaborative input from a wide range of vendors (and academics). What Dave seems to assume is that because IBM, BEA and MSFT have implemented WS-Addressing and then submitted it for standardisation, the process should assume that those guys know best.
When we submitted the original WS-CAF work to OASIS to form the OASIS WS-CAF Technical Committee, I suppose we could have said "rubber stamp this effort". But that's hardly an open, all-inclusive process, now is it? Our original work was simply a starting point and we were aiming to get more people and experiences involved. The results have been some pretty radical changes from what we originally submitted, but so be it: that's the way the community as a whole has decided to move.
Now Dave keeps banging on about how addressing needs to be out there quickly, so we need to fast-track the work. Fair enough. But that doesn't mean that objections to aspects of the specification from companies and individuals who weren't involved in it originally should be ignored, or classified as unimportant. All the ones I've seen so far have been based on implementation experience too!
Thursday, December 16, 2004
I'm an exe
I just took the test here and it turns out my file extension is exe. I'm not sure if that's good or bad. May take the test again later.
When is a door not a door? When it's a jar!
So when is an optional feature not an optional feature? When it's required.
OK, that's not as funny as the title (was that funny?), but it's not meant to be. We're doing some work with some other vendors in the Web Services space (names withheld to protect the innocent) and this requires us to use a couple of Web Services specifications that have various optional fields defined in them. We have implementations and they have implementations and we need to talk to one another. Now, my reading of optional is:
(i) it doesn't have to be there;
(ii) if it is present, then you can use it (maybe there are preference rules associated with when you can or should use it);
(iii) if it isn't there, then you should be able to deal with that.
Certainly the specifications I'm referring to here make it clear what optionality means, and it's covered by those three rules.
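To make that concrete, here's a minimal Java sketch of what rule-compliant handling might look like; the namespace and element name are invented purely for illustration, not taken from any of the specifications in question.

```java
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public final class OptionalFieldHandling {
    // Hypothetical namespace, purely for illustration.
    private static final String NS = "http://example.org/ws/spec";

    /**
     * Reads an optional element according to the three rules above:
     * it doesn't have to be there (i); if present, use it (ii);
     * if absent, cope gracefully (iii). Neither case is an error.
     */
    public static String readOptional(Element message, String localName,
                                      String defaultValue) {
        NodeList matches = message.getElementsByTagNameNS(NS, localName);
        if (matches.getLength() == 0) {
            return defaultValue;                   // rule (iii): absence is fine
        }
        return matches.item(0).getTextContent();   // rule (ii): present, so use it
    }
}
```

Rejecting the message in either branch, as the system we're up against does, honours none of the three.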
Unfortunately the implementation of the foreign system we're working with takes a different approach. In some cases:
(a) if the optional field is present, then it assumes the message is invalid;
(b) if the optional field isn't present, then it assumes the message is invalid.
Oh, and neither (a) nor (b) is publicised anywhere. Very annoying. Now I'm not entirely against an implementation requirement for something that is optional, but if such a thing exists, it should be mentioned somewhere.
Now here's the punchline: we're doing this for interoperability! Laugh? I nearly cried!
Wednesday, December 15, 2004
Eric Newcomer on WS-CAF and Web Services Transactions
Eric gives a really good overview of some of the work we've been doing over the past few years (more than I care to remember) on Web Services transactions. I'd encourage you to check it out.
I wrote an article with colleagues from IBM a while back which addressed some of Roger's themes.
Monday, December 13, 2004
Press release for WS-Context
I've mentioned a few times that we took part in an interoperability demonstrator at XML 2004 for WS-Context. Well we've just released a press release with our friends at IONA Technologies and Oracle.
Web Services transactions, entropy, heuristics and the information society
Imagine you walk into a bank and want to perform a transaction (banks are very useful things in transaction examples). That transaction involves you transferring money from one account (savings) to another (current). You obviously want this to happen with some kind of guarantee, so for the sake of this example let's assume we use an ACID transaction.
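To make the setup concrete, here's roughly what that transfer looks like in Java under JTA. The Account interface is invented for the example; in practice the work would go through XA-aware resources enlisted with the transaction.

```java
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class BankTransfer {
    // Hypothetical account abstraction; real updates would go through
    // XA resources enlisted with the transaction manager.
    interface Account {
        void debit(double amount);
        void credit(double amount);
    }

    public void transfer(Account savings, Account current, double amount)
            throws Exception {
        UserTransaction tx = (UserTransaction)
                new InitialContext().lookup("java:comp/UserTransaction");
        tx.begin();
        boolean ok = false;
        try {
            savings.debit(amount);
            current.credit(amount);
            ok = true;
        } finally {
            if (ok) {
                tx.commit();    // the ACID guarantee: both updates or neither
            } else {
                tx.rollback();
            }
        }
    }
}
```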
Now there's no such thing as a guarantee where physical media are concerned. The second law of thermodynamics states that the entropy of a closed system never decreases, and entropy is related to the level of chaos/disorder in the universe. Put simply, a less entropic system is more ordered and a more entropic system is more chaotic. I won't go into the definitions of "order" and "chaos" here, but another way of looking at this is to consider what happens when you buy an apple (the fruit, not the hardware!): it's fairly "ordered" in that the molecules that make it up are pretty much all "apple". However, if you leave it in the fruit bowl for too long it goes wrinkly and fuzzy with mould and eventually starts to decay entirely. (Kind of reminds me of some of the "experiments" we used to do in my undergraduate days to see how long unwashed plates would take to mould over - though looking back, I think they were really excuses for not washing up and nothing to do with physics experiments!)
Anyway, back to the apple. Over time, the molecules break down from the action of light, natural chemical reactions etc. The molecules form a host of other molecules and become less ordered, i.e., more entropy enters the system.
This is a very long winded way of saying that everything decays eventually. The same thing that happens to the apple happens to physical media. And statistics/probabilities say that even a new hard disk can fail on the first use.
So, in our bank example, despite the fact that we're using transactions and assuming that the transaction system is reliable, certain failures will always occur, given enough time and probability. The kinds of failure we're interested in for this example are those that occur after the participants in the two-phase commit transaction have said they will do the work requested of them (transfer the money), i.e., during the second (commit) phase. So, the money has been moved out of the savings account (it's really gone) and is being added to the current account, when the disk hosting the current account dies. Usually what this means is that we have a non-atomic outcome, or a heuristic outcome: the transaction coordinator has said commit, one participant (savings account) has said DONE, but the second one (current account) has said OOPS. There's no going back on the work the savings participant has done, so this transaction isn't going to be atomic (all or nothing).
Most enterprise transaction specifications and implementations allow for this via a heuristic error. This basically means that the transaction system can be informed (and hence can inform) that such an error has happened. There's not a lot that can be done automatically to fix these types of error. They often require semantic information about the application in order to restore consistency, so have to be handled by a system administrator. However, the important thing is that someone knows there's been a problem.
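In JTA terms, for instance, a heuristic surfaces as a checked exception on commit. Here's a minimal sketch of the reporting side; the alerting hook is hypothetical, but the exception types are the standard JTA ones:

```java
import javax.transaction.HeuristicMixedException;
import javax.transaction.HeuristicRollbackException;
import javax.transaction.RollbackException;
import javax.transaction.UserTransaction;

public class HeuristicReporting {
    public void commitAndReport(UserTransaction tx) throws Exception {
        try {
            tx.commit();
        } catch (HeuristicMixedException e) {
            // Some participants committed, some rolled back: a non-atomic
            // outcome that needs a human with application knowledge.
            alertAdministrator("heuristic mixed outcome", e);
        } catch (HeuristicRollbackException e) {
            // Everything rolled back after the coordinator said commit.
            alertAdministrator("heuristic rollback", e);
        } catch (RollbackException e) {
            // A clean rollback: atomic, just not the outcome we wanted.
            throw e;
        }
    }

    // Hypothetical hook: in a real system this might page an operator or
    // write to an audit log that someone actually reads.
    private void alertAdministrator(String what, Exception cause) {
        System.err.println("TRANSACTION NEEDS ATTENTION: " + what);
        cause.printStackTrace();
    }
}
```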
Imagine that this error happens and you don't know about it! Or at least don't know about it until the next time you check your account. Not good. Personally I'd like to know if there's been a screw-up as soon as possible. In our bank scenario, I can go and talk to someone in the branch. If I was doing this via the internet there's usually a number I can call to talk to someone (probably located in a different country these days ;-)
Now why is this important? Well, there are a few Web Services transactions specifications around that can be used in this scenario: BTP, WS-Atomic Transaction and WS-ACID Transaction. The first and last both allow heuristic-like errors to be sent from participant to coordinator and from coordinator to end-user, whereas the second (from IBM, Microsoft and BEA) doesn't. This seems like a strange omission, because errors do happen.
OK, it's not as bad as it might first seem. Of course I can use WS-Atomic Transaction to communicate these errors; I just can't do it within the specification. I'd have to overload SOAP faults (for example), or maybe use some proprietary extension (repeat after me: vendor lock-in is not good). Not exactly good for interoperability and/or portability. The fact that protocols like WS-Atomic Transaction and WS-ACID Transaction are really meant for interoperability of existing transaction service implementations (e.g., Tuxedo-to-CICS, or ATS-to-Encina), which is where heuristics originated, makes this omission even more striking.
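For the record, here's the kind of workaround I mean, sketched with SAAJ. The fault string and the extension namespace are invented, which is precisely the problem: no two vendors would invent the same ones.

```java
import javax.xml.soap.Detail;
import javax.xml.soap.DetailEntry;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPFactory;
import javax.xml.soap.SOAPFault;
import javax.xml.soap.SOAPMessage;

public class HeuristicOverSoapFault {
    public SOAPMessage buildHeuristicFault() throws Exception {
        SOAPMessage msg = MessageFactory.newInstance().createMessage();
        SOAPBody body = msg.getSOAPBody();

        SOAPFault fault = body.addFault();
        fault.setFaultCode("SOAP-ENV:Server");  // default SAAJ envelope prefix
        fault.setFaultString("transaction completed heuristically");

        // Proprietary detail entry carrying the heuristic outcome. The
        // namespace is made up, so only *our* implementation understands it.
        Detail detail = fault.addDetail();
        DetailEntry entry = detail.addDetailEntry(
                SOAPFactory.newInstance().createName(
                        "heuristic", "ext", "http://example.org/tx/extension"));
        entry.addTextNode("HeuristicMixed");
        return msg;
    }
}
```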
Oh well. Maybe failures don't happen. The 2nd law of thermodynamics does fall down if time flows backwards ;-)
Monday, December 06, 2004
XML 2004 paper and presentation available
Well, the people at IDEAlliance who hosted XML 2004 have put the papers and presentations online. (No idea why they are hosted in different places!)
The paper that Doug Bunting and I wrote on WS-CAF is available here and the presentation is here. As we said on the day, if you walk away from the paper/presentation with only one thing, I'd hope it's that there are some pretty important holes in the current Web Services architecture. WS-CAF presents one possible solution to some of these holes, but as with most things in computer science, there are other possible solutions. It's the problem space that is important: without generally agreed-upon solutions, Web Services are going to continue to be fragmented and disjointed.
Anyway, overall it was a pretty good conference, so I'd encourage you to check out more than our paper.
Update to WS-Context interoperability live endpoint
Here I gave the URL for our WS-Context interoperability endpoint. I'm working with one of my colleagues from Oracle on the WS-CAF TC to get the interoperability documents made public. In the meantime, our Retailer is available at http://services.arjuna.com:8080/jboss-net/services/Retailer and the configuration for the shopping cart is at http://services.arjuna.com:8080/wscafdemo/config.jsp.
Wednesday, December 01, 2004
Transaction interoperability: myth or reality?
I've been reading some interesting posts on a few mailing lists about transaction interoperability: a) is it possible, b) is it desirable, and c) how do you achieve it? This is an issue that's pretty close to my heart because I've been working with various people over the years, such as Eric, Ian and Tom, to accomplish it in one environment or another. I hope what follows is an objective discussion.
The short answer to a) is "yes", it's possible. The longer answer is "it often takes a lot of effort, and sometimes you have to jump through more hoops than you really should". Historically, transaction interoperability has been a kind of Holy Grail, because the likes of CICS, Tuxedo and DEC ACMS were backed by companies who had the muscle to push the homogeneous software pattern: one implementation throughout the organisation. The obvious result of this was vendor lock-in, but the knock-on effect was that if you really, really, really needed interoperability then you'd pay one or more of these companies to tailor a solution for you. A win-win scenario for them, but not particularly attractive to the customer.
As a result, there have been several efforts at interoperability over the years, the most notable probably being the CORBA Object Transaction Service (OTS). Unfortunately, the early versions of the OTS (prior to 1.2) suffered from the general non-interoperability of CORBA implementations (don't get me started on the BOA!), which meant that even if you had the same OTS implementation running on both sides of a conversation, unless it was running on the same CORBA ORB, interoperability was once again tricky to achieve. Fortunately the likes of Steve, Michi and others helped to massage CORBA into an interoperable distributed system (almost a decade after the OMG was first established), and OTS 1.2 and beyond became more and more interoperable. Unfortunately, for one reason or another (the BOA being one of them), OTS take-up was slower than originally imagined, and even today interoperability at this level isn't great.
However, that's where Web Services can help. I've said this before, but I think it's important enough to say again: Web Services are as much about interoperability as they are about Internet scale computing. We're seeing a lot of pull for them in the interoperability space. Fortunately the Web Services transactions specifications from OASIS and IBSoft provide an Atomic Transaction model that is designed specifically for interoperability (I'm biased here, but I'd say that the one in WS-TXM is better for this). So, interoperability at this level has become a reality and we're starting to see implementations of different underlying transaction services talking to each other!
OK, so some of the above is actually an answer to c). But back to b), and I think this is a no-brainer in today's world of cost-cutting and company mergers; the notion of a single implementation of anything, be it database, application server or transaction service, is simply no longer true (if it ever truly was). Companies that grow through acquisitions typically can't issue edicts about scrapping existing infrastructural investments in favour of the current software vendor of the day. So interoperability within the organisation is happening as a fact of life today. Interoperability across organisations (even at the level of traditional ACID transactions) will happen, but it'll be less prevalent (simply because the ACID transaction model doesn't really work in that world).
That's not to say that all transaction services need to support interoperability. There are bound to be environments where interoperability is not needed by default and where extra work may be necessary when and if it ever is needed. But I think these are niche cases and should be carefully examined before going ahead.