Since I'm now the Development Manager for JBossESB (amongst my roles), I'm obviously interested in things such as JBI and SCA and how they might relate. So I've been collecting articles.
Here's a nice discussion by Edwin Khodabakchian. Another by JJ (BTW, congratulations on the move to SAP, JJ).
Obviously both Oracle and SAP are co-authors on SCA, so for those people who think the above articles may not be 100% objective, there's an interesting one here that makes a stronger statement concerning the overlap between JBI and SCA.
There are some Gartner comments here and here.
And finally, one analyst's take on a fairly obvious missing co-author, with which I agree.
Any more, please post to the comments.
Friday, December 16, 2005
Monday, December 05, 2005
Why JBoss?
So why have I moved across to JBoss? Well it hasn't been an easy choice, as quite a few people out there will know. Firstly I was one of the founders of the company. I was also a director of Arjuna Technologies and being a director of any company imposes certain obligations on you that don't come with normal employment contracts. I want Arjuna Technologies to succeed, because I believe in what we've been trying to do and also for the many shareholders who have supported us over the years.
We've been working on having a closer relationship between Arjuna Technologies and JBoss almost from the start. The fact that Bob Bickel is on our advisory board and also works for JBoss is a happy coincidence (we like to point out that Bob joined us first!) When the opportunity finally arose, it was simply too good for the company to pass up. Also to be honest, the way open source has burst onto the software scene, if the likes of IBM are feeling the pinch then you can bet that either every other middleware vendor is too, or is feeling OSS breathing down their necks! Nuff said.
The way JBoss operates when getting involved with any project, be it caching or transactions, is that they need to have a critical mass of knowledge as well as the software. So, as well as the transaction software that we have developed over the past 20 years, they needed the knowledge to complement it. And that's where I came in.
I think this is an exciting opportunity for JBoss and Arjuna. It's also an extremely important deal for both companies. Which is why I, along with many other people in Arjuna, worked hard to ensure it happened. In some ways it hasn't been an easy decision to make, but in others it was the only thing that made sense.
Thursday, November 24, 2005
End of an era?
By now you may have heard about the Arjuna and JBoss deal: basically, JBoss acquires the transaction technology that Arjuna developed independently or while working within Hewlett-Packard, which we then acquired from them when we spun out. A consequence of this acquisition is that I now work for JBoss. I'll blog on the deal itself later, but for now I wanted to say a few things about leaving Arjuna Technologies.
This isn't exactly the end I had in mind when we spun out of HP and started Arjuna Technologies Ltd (ATL). I've always seen ATL as a logical extension of my original start-up, Arjuna Solutions, and I think deep down I thought company acquisition was a good possibility too. Being a director, obviously the exit-strategy benefits to the shareholders were always uppermost in my mind, but there's always a selfish component (and no, it's not monetary): I've been in the enviable position to have worked with the same group of friends for the best part of twenty years (if we're counting, then it's 18+ years for Stuart and me) and I would like to have had that continue. Of course we've all had our disagreements and ups-and-downs over the years (who hasn't?!), but it's always been fun and interesting.
Although we're not moving far (literally down one floor and across a corridor!) that daily interaction will be something I'll miss a lot. After I received word that the deal had completed, my first thought wasn't about the work ahead, but it was "It's not my company any more :-(". Strange.
I obviously wish everyone at ATL the best of luck in the future, but particularly Stuart, Barry, Steve and Dave. It has been fun (most of the time) and it's a period in my life I wouldn't change. I've learnt a lot from Arjuna Solutions and Arjuna Technologies and I only wish the "exit strategy" had been more inclusive!
With any luck our paths will continue to cross.
Friday, November 18, 2005
WS-TX face-to-face meeting
I'm on my way back from the inaugural OASIS WS-TX face-to-face meeting. I think the two-day meeting went well - certainly better than expected. Eric is one of the chairs, along with Ian, who I've worked with before on the CORBA Activity Service and its J2EE equivalent. It's a small world!
Eric has some interesting comments on the face-to-face. I'd just like to add that after the meeting I'm even more convinced that WS-CAF has a lot to offer this new TC, both in terms of experiences gained and as a place to do additional work, such as heuristics.
Greg also has something to say on the subject, but I think WS-Coordination isn't the right building block for sessions: it should be WS-Context. Greg also has a reference to the XML 2005 presentation he just presented, based on our paper.
Saturday, November 12, 2005
Interesting statistics for WWW 2006 submissions
This year we've got 13 tracks for paper submissions. Here are the statistics:
Browsers and User Interfaces: 51 papers
Data Mining: 98 papers
E* Applications: 105 papers
Hypermedia and Multimedia: 23 papers
Performance, Reliability and Scalability: 34 papers
Pervasive Web and Mobility: 14 papers
Search: 128 papers
Security: 37 papers
Semantic Web: 75 papers
Web Engineering: 61 papers
XML and Web Services: 67 papers
Developing Regions: 0 papers
Industrial Practice and Experience: 5 papers
Update: overall, paper submissions are up 25% on WWW2005.
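Out of curiosity, the per-track figures above can be tallied with a quick script (the numbers are copied straight from the list; the totals computed here are my own arithmetic, not from the conference organisers):

```python
# Paper submissions per WWW2006 track, as listed in the post.
submissions = {
    "Browsers and User Interfaces": 51,
    "Data Mining": 98,
    "E* Applications": 105,
    "Hypermedia and Multimedia": 23,
    "Performance, Reliability and Scalability": 34,
    "Pervasive Web and Mobility": 14,
    "Search": 128,
    "Security": 37,
    "Semantic Web": 75,
    "Web Engineering": 61,
    "XML and Web Services": 67,
    "Developing Regions": 0,
    "Industrial Practice and Experience": 5,
}

total = sum(submissions.values())                 # 698 papers across all 13 tracks
busiest = max(submissions, key=submissions.get)   # "Search", with 128
print(f"Total submissions: {total}")
print(f"Busiest track: {busiest}")
```

Unsurprisingly, Search dominates, with nearly a fifth of all submissions on its own.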
And so it begins!
The submission deadline for WWW2006 finally passed this morning (10am my time). We got 67 papers for the XML & Web Services track that I'm chairing. This year we're using the OpenConf system for paper reviews and assignments. It's pretty good and certainly better than most of the ones I've used over the years. However, there are still issues with it that made my role of assigning papers more problematic than it should have been. Maybe if I ever find some spare time I'll sit down and write the ultimate paper reviewing system; with 20 years of reviewing experience behind me, how hard can it be ;-)?
Anyway, the assignments are done. We ended up with a couple more papers per reviewer than I expected, due to the high number of submissions and the fact that a couple of PC members dropped out at the very last minute. Now we've got until the 24th of December to do the reviews :-(
Tuesday, November 08, 2005
WS-TX presentation
The first face-to-face meeting of the new OASIS WS-TX technical committee is next week. As one of the co-authors I'd try to attend anyway, but as luck would have it, I'm also giving one of the presentations: on WS-AtomicTransaction. Hopefully there'll be discussion around WS-CAF, as I strongly believe that there is scope for collaborative work between these two committees. I'm in the process of writing a blog entry on that subject for WebServices.org, so watch this space.
Monday, November 07, 2005
Friday, November 04, 2005
WWW 2006 submission deadline extended
Extension: The submission deadline for WWW2006 refereed papers has been extended to Friday November 11 11:59PM Hawaii time (hard deadline).
As you may know, on Sunday October 30 there was a significant fire at the site hosting the www2006 server. No one will be allowed into the building until Christmas and half the research labs are completely destroyed.
We attempted to mask the failure with a temporary site and have restored the site as quickly as possible in order to maintain the original deadline. But we have become aware that DNS and routing problems are still causing locations to be unable to reach the site even as late as Friday November 4th 2005.
Given these technical difficulties, the fairest thing to do seems to be to grant a blanket 1-week extension, until November 11th 2005.
In fairness to those who submitted on time and managed to find the web site, we will allow revised versions of papers to be uploaded up to the new deadline.
We apologize unreservedly for the confusion stemming from this fire and its aftermath - it's been an interesting exercise in network complexity, and at least nobody was hurt - and we thank you very much for submitting your work to WWW2006.
TEMPORARY WWW2006 WEBSITE: http://www.soton.ac.uk/~lac/ but SHOULD be visible at www2006.org
DIRECT ACCESS to the paper submission website: http://www.openconf.org/www2006/
TEMPORARY EMAIL FOR ENQUIRIES: lescarr@gmail.com
Information about the fire can be found at: http://news.bbc.co.uk/1/hi/england/hampshire/4390048.stm.
---
WWW2006 Co-chairs: Les Carr, Dave De Roure, Arun Iyengar
WWW2006 PC Chairs: Carole Goble, Mike Dahlin
Thursday, November 03, 2005
URGENT WWW2006 update
---------------------------------------------------------
WWW2006 Web Site affected by serious fire
---------------------------------------------------------
Due to a serious fire at the University of Southampton, UK, the www2006.org website and mailing lists are temporarily unavailable.
THE SUBMISSION PROCESSES have been unaffected by this fire; only information about the submission URL, deadlines, tracks, document formatting etc. has been affected. A temporary replacement website has been set up with all the information for submitting papers, panels and tutorials.
The deadline for paper submission is UNCHANGED. It remains 4th November 2005 midnight Hawai'i time. The panel and tutorial submission deadlines are also unchanged.
Access to the format instructions on the web site has been patchy. We therefore WAIVE the formatting regulations. Submitting a 10 page pdf document in any format is sufficient.
TEMPORARY WWW2006 WEBSITE: http://www.soton.ac.uk/~lac/ SHOULD be visible at www2006.org but we know that this isn't always the case due to networking irregularities.
DIRECT ACCESS to the paper submission website: http://www.openconf.org/www2006/
TEMPORARY EMAIL FOR ENQUIRIES: lescarr@gmail.com
Information about the fire can be found at: http://news.bbc.co.uk/1/hi/england/hampshire/4390048.stm.
---
WWW2006 Co-chairs: Les Carr, Dave De Roure, Arun Iyengar
WWW2006 PC Chairs: Carole Goble, Mike Dahlin
Thursday, October 27, 2005
Friday, October 21, 2005
Spot the difference?
For her birthday, I bought my wife a new mobile phone, after much background research. She wanted one with a camera, but the fact it also runs Java games is a bonus. She's a Tetris fanatic, but unfortunately that game didn't come on the phone by default. So, I spent some time last night looking for a good implementation and came across this version, which downloaded first time and plays like a charm.
However, this got me thinking. The jar is small, at only 18.29K. There are versions designed for the PC that are larger (e.g., this one is 744K), but not huge in the grand scheme of things. Yet when I run the game on the mobile phone, the resources it takes are tiny compared to when I run it on my PC! I know it's the difference between J2ME and J2SE environments, but you've got to wonder: how much of the latter do you really need to run your everyday projects?
I know there's a lot of work going on around Java microcontainers, but maybe we also need effort in micro-JVMs, similar to micro-kernels of old! Time for the big monolithic JVM to bow out, I think.
Thursday, October 20, 2005
A message from the twilight zone
Normally Jason Bloomberg speaks for ZapThink in the real world. However, his latest comments about Oracle's Fusion middleware are definitely from another reality.
To call it Frankenstein shows a strange lack of understanding about how large middleware offerings really develop. I can't put my finger on a single one (including IBM's) that has been entirely developed in-house and from scratch. These things develop in leaps and bounds as requirements change. As companies acquire other companies, it's also inevitable that some kind of cross-pollination will occur. But to suggest that it's not uniform or cohesive in some way is very strange indeed. Maybe he just had an off day?
It's also the case that this kind of evolution-rather-than-revolution approach happens throughout the industry. Many large organisations we've come across have grown by acquisition and are forced to leverage all of their infrastructural investments simply because they can't afford to migrate to a homogeneous environment. That's where SOA and Web Services really come into play these days.
Now whether IBM will see Oracle as a threat in this space is a different matter.
Note also that Frankenstein was trying to advance science and defeat death. Not such a bad goal!
Tuesday, October 18, 2005
Thursday, October 13, 2005
OASIS WS-TX announcement
The WS-TX specifications have finally made it into OASIS, with the formation of the WS-TX Technical Committee. It's been a long time getting here!
WARNING: Reference Parameters considered bad for your health
There's been a discussion going on over at the WS-Addressing public forum around Reference Parameters, which degenerated into the need for session semantics. The specification is already more complex than it probably needs to be: not a good example of Occam's Razor. I hate to sound like a broken record, but use WS-Context. The argument for it is pretty compelling.
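To make the contrast concrete, here's a small sketch of what the WS-Context approach buys you: the session identifier travels as an explicit, typed SOAP header block that any intermediary can inspect, instead of being buried in opaque, service-specific reference parameters. The namespace URI and element names below are illustrative assumptions for the sketch, not taken verbatim from either specification:

```python
# Sketch: a WS-Context-style session header. The namespace and element
# names are hypothetical placeholders, used only to show the shape.
import xml.etree.ElementTree as ET

WSCTX = "http://example.org/ws-context"  # assumed namespace, not the real one

def make_context_header(context_id: str, expires: str) -> ET.Element:
    """Build a context header block: the session identifier is a visible,
    first-class element rather than opaque addressing metadata."""
    ctx = ET.Element(f"{{{WSCTX}}}context")
    ident = ET.SubElement(ctx, f"{{{WSCTX}}}context-identifier")
    ident.text = context_id
    exp = ET.SubElement(ctx, f"{{{WSCTX}}}expires")
    exp.text = expires
    return ctx

header = make_context_header("urn:session:1234", "2005-12-31T00:00:00Z")
print(ET.tostring(header).decode())
```

With reference parameters, by contrast, that same identifier would be endpoint-specific XML that only the issuing service can interpret, which is exactly how session semantics end up smuggled into the addressing layer.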
Just back from JBoss World 2005
We're a JBoss partner and have been for a few years. We went to JBoss Two a couple of years ago and there were maybe 100 people, so it was interesting to see the much larger attendance at this year's JBoss World in Barcelona. I'm not sure what the official figures are, but I'd reckon that there were in excess of 300 people, crammed into 3 presentation rooms. There were some good presentations and it was interesting to see the enthusiasm on the faces of the attendees.
The most interesting presentation I saw still confuses me slightly: it was presented by a group of belly dancers at the Elephant. I think they were trying to demonstrate vertical sector take-up of JBossAS, but something got lost in translation. It was definitely the most well attended presentation of the whole conference!
OK, it was a party! I admit it!
Update
I love Barcelona, but the airport is definitely not designed for the rapid transit of passengers. It's as if someone built a city high street and remembered after the fact that they hadn't put on the departure gates! Good exercise I suppose.
Friday, October 07, 2005
Did H.G. Wells predict 21st Century Computer Science Students?
Back in 1895 H. G. Wells wrote The Time Machine, a story about a man who invents a time machine, travels far into the future and the experiences he has. There've been two movies: the 1960 George Pal version and the more recent 2002 version. Despite the fact that Guy Pearce is a relative (on my wife's side), I prefer the original with Rod Taylor; and any director who can bring something like The 7 Faces of Dr. Lao to the screen always has an edge anyway!
But I digress. When the Time Traveller reaches 802701 A.D. he finds that the human race has fragmented into two species: the Eloi and the Morlocks. They are markedly different in appearance, with the Eloi as smaller, more beautiful humans who live above ground, and the Morlocks as grotesque creatures who live in the perpetual darkness below ground. It turns out that the Eloi have evolved past the point where they could understand or use the advanced machinery of their ancestors and rely on the Morlocks to provide them with food; the Morlocks maintain much of the machinery which helps both species and eat the Eloi. In essence, the Eloi spend all their time playing and appreciating nature and life without knowing why things work the way they do, whereas the Morlocks spend all their time ensuring things work and can't appreciate anything else.
Over the years, the School of Computing here at the University of Newcastle has had a world class reputation for both its undergraduate and postgraduate degrees. But one of the things I've noticed is a gradual change in the subjects that undergraduates learn from when I did my degree back in the mid 1980s. Back then, we learnt programming languages such as Pascal, C, 6502 assembler and 68000 assembler, did hardware design (e.g., VLSI) as well as operating systems (using Concurrent Euclid, strangely enough) and network programming. For a software course, it covered a lot of depth and breadth: we were taught why things work as well as how they work.
These days, with the advent of languages such as Java, GUIs and even the Web, students are taught at a much higher level, with little or no experience of hardware or operating system principles/architecture. (Note, I've reason to believe that this is not purely a local phenomenon.) That's because industry needs a new set of skills. However, if you ever get a chance to talk to successful graduates these days, there's a definite lack of understanding about why things work at any level below the virtual machine. Now I'm not saying that everything I was taught all those years ago is still useful to me today, but it gave me (and others) an appreciation of so many different aspects that it is often surprising when something from left-field will be useful. I realise there's a trade-off to be made between time and subjects (there are a lot more topics today in computer science than there were 20 years ago), but I wonder: are we breeding a race of Eloi?
Thursday, October 06, 2005
Tuesday, October 04, 2005
WWW 2006 CFP
============================================================
WWW2006 CALL FOR PARTICIPATION
http://www2006.org/
============================================================
The International World Wide Web Conference Committee (IW3C2) invites
you to participate in the Fifteenth International World Wide Web
Conference in Edinburgh, Scotland on May 22nd-26th 2006.
The conference is the prime venue for dissemination of Web research
and is held in association with ACM, BCS, ECS, IFIP and W3C.
*** REFEREED PAPERS (Submission Deadline: November 4, 2005)
WWW2006 CALL FOR PARTICIPATION
http://www2006.org/
============================================================
The International World Wide Web Conference Committee (IW3C2) invites
you to participate in the Fifteenth International World Wide Web
Conference in Edinburgh, Scotland on May 22nd-26th 2006.
The conference is the prime venue for dissemination of Web research
and is held in association with ACM, BCS, ECS, IFIP and W3C.
*** REFEREED PAPERS (Submission Deadline: November 4, 2005)
WWW2006 seeks original papers describing research in all areas of the
web. Topics include but are not limited to
# E* Applications: E-Communities, E-Learning, E-Commerce, E-Science,
E-Government and E-Humanities
# Browsers and User Interfaces
# Data Mining
# Hypermedia and Multimedia
# Performance, Reliability and Scalability
# Pervasive Web and Mobility
# Search
# Security, Privacy, and Ethics
# Semantic Web
# Web Engineering
# XML and Web Services
# Industrial Practice and Experience (Alternate track)
# Developing Regions (Alternate track)
Detailed descriptions of each of these tracks appear
at http://www2006.org/tracks/
Submissions should present original reports of substantive new
work. Papers should properly place the work within the field, cite
related work, and clearly indicate the innovative aspects of the work
and its contribution to the field. We will not accept any paper which,
at the time of submission, is under review for or has already been
published or accepted for publication in a journal or another
conference.
New for WWW2006: We solicit submissions of "position papers"
articulating high-level architectural visions, describing challenging
future directions, or critiquing current design wisdom. Accepted
position papers will be presented at the conference and appear in the
proceedings. Both "regular papers" and "position papers" are subject
to the same rigorous reviewing process, but the emphasis may differ
--- regular papers should present significant reproducible results
while position papers may present preliminary work rich in
implications for future research.
All papers will be peer-reviewed by reviewers from an International
Program Committee. Accepted papers will appear in the conference
proceedings published by the Association for Computing Machinery
(ACM), and will also be accessible to the general public via
http://www2006.org/. Authors of all accepted papers will be required
to transfer copyright to the IW3C2.
*** TUTORIALS (Submission Deadline: EXTENDED to November 1)
A program of tutorials will cover topics of current interest to web
design, development, services, operation, use, and evaluation. These
half and full-day sessions will be led by internationally recognized
experts and experienced instructors using prepared content. For more
information and submission details see
http://www2006.org/tutorials/ .
*** PANELS (Submission Deadline: November 4th 2005)
Panels provide an interactive forum that will engage both panelists
and the audience in lively discussion of important and often
controversial issues. For more information and submission details see
http://www2006.org/panels/.
*** POSTERS (Submission Deadline: February 14th 2006)
Posters provide a forum for late-breaking research, and facilitate
feedback in an informal setting. Posters are peer-reviewed. The poster
area provides an opportunity for researchers and practitioners to
present and demonstrate their recent web-related research, and to
obtain feedback from their peers in an informal setting. It gives
conference attendees a way to learn about innovative works in progress
in a timely and informal manner. Formatting and submission
requirements are available at http://www2006.org/posters/.
IMPORTANT DATES
Conference: May 22nd-26th 2006
Submission Deadlines:
Workshop proposal: October 1, 2005
Tutorial proposal: November 1, 2005
Paper (regular): November 4, 2005
Paper (alternate track): November 4, 2005
Panel proposal: November 4, 2005
Poster: February 14, 2006
Acceptance Notification:
Workshop proposal: November 1, 2005
Tutorial proposal: December 1, 2005
Paper (regular): January 27, 2006
Panel proposal: January 27, 2006
Paper (alternate track): February 10, 2006
Poster: March 21, 2006
WWW2006 COMMITTEES AND CHAIRS
CONFERENCE CO-CHAIRS
Leslie Carr (University of Southampton, UK)
Dave De Roure (University of Southampton, UK)
Arun Iyengar (IBM T.J. Watson Research Center, USA)
PROGRAM COMMITTEE CO-CHAIRS
Mike Dahlin (University of Texas, USA)
Carole Goble (University of Manchester, UK)
TRACK VICE CHAIRS AND DEPUTY VICE CHAIRS
E* Applications: E-Communities, E-Learning, E-Commerce, E-Science,
E-Government, and E-Humanities
Mark Manasse (Microsoft Research, USA)
Bertram Ludaescher (UC Davis/SDSC, USA)
Wolfgang Nejdl (Universitat Hannover, Germany)
Browsers and User Interfaces
Yoelle Maarek (IBM Haifa Research Lab, Israel)
Krishna Bharat (Google)
Data Mining
Ramakrishnan Srikant (IBM Almaden Research Center, USA)
Soumen Chakrabarti (IIT Bombay, India)
Hypermedia and Multimedia
Lloyd Rutledge (CWI, Netherlands)
Wei-Ying Ma (Microsoft Research, China)
Performance, Reliability and Scalability
Misha Rabinovich (AT&T, USA)
Jeff Chase (Duke University, USA)
Pervasive Web and Mobility
Venkat Padmanabhan (Microsoft, USA)
Jason Nieh (Columbia University, USA)
Search
Junghoo Cho (UCLA, USA)
Torsten Suel (Polytechnic University, USA)
Security, Privacy, and Ethics
Ari Juels (RSA, USA)
Angelos Keromytis (Columbia University, USA)
Semantic Web
Frank van Harmelen (Vrije Universiteit, Netherlands)
Mike Uschold (Boeing)
Web Engineering
David Lowe (UTS, Australia)
Luis Olsina (Universidad Nacional de La Pampa, Argentina)
XML and Web Services
Mark Little (Arjuna, UK)
Santosh Shrivastava (University of Newcastle, UK)
Industrial Practice and Experience
Marc Najork (Microsoft Research, USA)
Andy Stanford-Clark (IBM Hursley Laboratory, UK)
Developing Regions
Eric Brewer (UC Berkeley, USA)
Krithi Ramamritham (IIT Bombay, India)
TUTORIAL AND WORKSHOP CO-CHAIRS:
Robin Chen (AT&T, USA)
Ian Horrocks (University of Manchester, UK)
Irwin King (Chinese University of Hong Kong, China)
PANELS CO-CHAIRS:
Marti Hearst (UC Berkeley, USA)
Prabhakar Raghavan (Yahoo!, USA)
DEVELOPER'S TRACK CHAIR
Jeremy Carroll (HP Labs, UK)
Mark Baker (Coactus)
POSTERS CHAIR
Bebo White (SLAC)
Saturday, October 01, 2005
Web Services sessions
It was both funny and gratifying how many times I either heard about the need for Web Services sessions or talked with people about that subject during HPTS, and with some widely diverse individuals. No one I talked to disputed the requirement and everyone wanted standardisation. Maybe we're not doing a good enough job of advertising, or it's an effect of insular company policies, but only about 50% of them had heard of WS-Context. Fortunately it didn't take long to describe what it is and to convert more people to The Cause. Let's hope it leads to some wider adoption of the technology.
One of the common (though not exclusive) themes that seemed to lead people to WS-Context was trying to use Reference Parameters and WS-Addressing in general, to achieve sessions with more than one participant and which spanned multiple invocations. It reminded me a lot of the paper we wrote on Web Services sessions. Greg, Anish, Hal and I have been finishing up the two papers on the subject that were accepted for XML 2005 and ECOWS 2005. More good opportunities to spread the word.
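To make the contrast concrete, here's a minimal Python sketch of session context propagation in the WS-Context style: every message in a session carries the same context identifier in a SOAP header, so the session can span multiple invocations and multiple participants rather than being tied to a single endpoint reference. The element names and the context namespace URI below are illustrative placeholders, not the official WS-Context schema.

```python
from xml.etree import ElementTree as ET
import uuid

SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"
CTX_NS = "urn:example:ws-context"  # placeholder, not the official WS-Context namespace

def make_envelope_with_context(context_id: str) -> ET.Element:
    """Build a SOAP envelope whose header carries a shared session context.

    Each participant in the session includes the same context identifier,
    so the session spans multiple invocations and multiple services.
    """
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP_NS}}}Header")
    ctx = ET.SubElement(header, f"{{{CTX_NS}}}Context")
    ident = ET.SubElement(ctx, f"{{{CTX_NS}}}context-identifier")
    ident.text = context_id
    ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    return env

# One context shared across several calls, possibly to several services:
session = f"urn:uuid:{uuid.uuid4()}"
msg1 = make_envelope_with_context(session)
msg2 = make_envelope_with_context(session)
```

The point of the sketch is that the session is named by the context, not by a service-specific reference parameter, which is why it composes naturally across more than one participant.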
Friday, September 30, 2005
HPTS: the return
I'm back from HPTS. From what I've been told, the submitted papers and presentations should be available at the web site soon. Definitely worth a look. Savas had a great first-time experience and was even designated the unofficial workshop photographer - until his batteries ran out!
There were some really good presentations, but it's always the talks in and around the events that I find the most interesting. Maybe there are other conferences/workshops like this, but I've never been to anything where the sense of community is so high and people are able to transcend any normal company relationships. The feeling is more akin to a group of co-workers/co-researchers all collaborating on the same problem and getting together to share experiences (often over the odd beer or two).
In answer to my earlier question, there were more new faces than I saw when we arrived and the quality of the work is just as high as in previous years. Probably my only disappointment with this year's workshop was its almost total lack of controversy and heated (though good) discussions: they've always been a characteristic of previous workshops, but not this time. It's a shame, because I always find them stimulating. However, nothing can really spoil these workshops, particularly when you look at the surroundings.
My presentation went alright, though it is very difficult to summarise 20 years of development into 20 minutes! After talking to Jim it's clear that the work his group is doing on their Lightweight Transaction Manager is very similar to what we did on Arjuna with recoverable and durable+recoverable objects. It encourages developers to use transactions as a fault-tolerant structuring mechanism without having to always suffer the performance penalties. If it's supported in a promotable manner too, this can be a very nice and powerful facility to offer developers.
The trip back was slightly more eventful than the one down. Our original plans were to drive back and to stop somewhere to eat. Unfortunately, my flight out didn't allow us that option, so we had to go straight up 101. Well, that was the plan. Maybe it was Paul's radio selections, including Billy Joel, Madonna or the Bee Gees, or the discussions about work to come that put Savas off, but we ended up taking a more scenic route to the freeway than intended (and I bet we couldn't retrace our steps now if we wanted to). I was slightly late checking in for the flight back and didn't get a chance to eat prior to take-off, but that (and the serious case of jet-lag) was a small price to pay for the trip.
Monday, September 26, 2005
On the road to HPTS
We finally arrived at HPTS and it was a nice journey. The four of us (myself, Paul Watson, Savas and Dene Kuo) hired a car and drove from San Francisco down Highway 1. We were all probably showing our ages, listening to a-ha, Bon Jovi and others I care not to mention, as we drove. The scenery is great, the drive is sedate and I'd recommend it to anyone who has a chance. We stopped at a few places to stretch our legs and just take it easy: Santa Cruz was great for lunch, despite it being overrun with highschool cheerleaders, like something out of Bring It On, and we stopped for coffee in Carmel, with a walk on the beach. We finally arrived at Asilomar and as usual with this place, it's like stepping out of the real world and into the Twilight Zone: beauty, quiet, little of the 21st century comforts you're used to, and no real concept of time to interrupt some good work to come.
Today was mainly checking in and socialising with the rest of the invited attendees. It's been good to catch up with Jim Johnson, Jim Gray, Pat Helland and others I probably only get to see at this workshop. This year there seem to be fewer new faces than in previous years. I'm not sure what that means: are transactions so "un-cool" that the field doesn't attract people as much, have all the really interesting and ground-breaking things in transactions been done already, or is it something entirely different? I'm looking forward to tomorrow when the real work begins; maybe that'll help answer the question.
Update: I've been asked by Paul to point out that although he was controlling the radio in the car, he wasn't responsible for what we listened to. ;-)
Sunday, September 25, 2005
WS-Context usage
I'm at HPTS this week. Will blog about that when I get a chance and if I can find internet access at Asilomar (currently at an airport hotel for the night). That's the thing about a retreat like Asilomar: no TV, no phones in rooms and little/no connectivity; I may even have to resort to the good ol' modem via the pay-phone!
Anyway, a friend from the School of Computing at Newcastle University pointed me at this, which I think is really interesting. I'm glad to see more groups taking up the WS-Context model and hope it continues.
Update: I should thank Savas for his help on promoting WS-Context; this is one of the results.
Tuesday, September 06, 2005
HPTS agenda
The HPTS agenda is shaping up. Although the three papers that I/we submitted were accepted, I'm only talking about the evolution of ATS paper. But that's good: I'm having enough trouble finding time to write just that one presentation!
Friday, September 02, 2005
Microsoft and ESBs
Here's an interesting article. So far ESBs have played predominantly in the Java market, but there's nothing inherent in the concept to limit it. So, it's interesting to hear Microsoft's take on things.
Wednesday, August 03, 2005
Blog vacation
I'm off to Canada tomorrow for a holiday. No blogging allowed I'm afraid. Back in 2 weeks.
Sunday, July 10, 2005
Yet another container architecture?
Don Box makes a good point about JBI versus EJB containers, which is essentially: why another container in J2EE; isn't the application server sufficient? As Steve points out, however, and I agree with him, we shouldn't be looking at this as yet another container, but rather as a way of facilitating a micro-container architecture (cf. micro-kernel). In fact, back when there was an HP middleware division we tried to standardise on an approach that had been used successfully by the HP-AS guys to achieve this. There was even an attempt to standardise this with JSR 111, led by my friend and co-author Jon Maron.
But I think Don still has a point: until there's a standardised approach to this, some formal way of describing these different containers as aspects of the same core infrastructure, it's just one approach that some vendors may take, whereas others may re-invent the wheel and impose different interaction patterns on users, resulting in the JBI container as a truly different beast to an EJB container. Maybe it's time to revisit the CSF?
Thursday, July 07, 2005
San Francisco restaurant recommendation
While having our WS-CAF face-to-face during JavaOne, we had the usual TC dinner. This time, on a recommendation from one of Eric's colleagues, we went to Yabbies, a seafood restaurant. I have to say that it was very good and I'd certainly like to repeat the experience. Though next time I'd probably go for the special that Greg and Eric shared. So if you're in the area I'd definitely recommend it for at least one visit!
Wednesday, July 06, 2005
WWW2006 Call for Papers
In case you missed it, the Call for Papers for WWW2006 is out. Now I need to decide whether I want the overhead of submitting something as well as reviewing!
SOAP discussion continues
The discussion about SOAP performance has moved to here and I feel like I should interject again. And once again it's a case of agreeing with both sides up to a point.
I'd like to consider myself one of those distributed systems experts that Michi mentions, given that I've been building them in various flavours since the mid-1980s. With that in mind, I have to agree with Michi that as far as distributed systems go, SOAP and Web Services are poor. Ignoring other problems, such as the fact that there's still a lack of an overall architecture, and concentrating solely on performance, it's no contest. Having to resort to hardware to improve performance is the best answer that a poor implementation can achieve and it's certainly not something to be lauded (and I'm fairly sure that's not what Savas is trying to do). Sun tried that many years ago with Java (anyone else remember the JavaStation?) If I were building a distributed application and I controlled the entire infrastructure, or simply didn't have to worry about interacting with heterogeneous technologies, then I'd consider CORBA or maybe even ICE (I still haven't had time to try this yet, Michi). SOAP (and probably even XML for anything other than configuration) wouldn't factor into my choices in this case.
However, as I've said before, if I need to worry about interoperability with other organisations, or even just between departments in large organisations, then I'd have second thoughts. It's certainly true to say that I'd prefer to use something like CORBA if I had my way (though even that has its deficiencies compared to some of the stuff we did back in the 1980s). Unfortunately there are commercial realities that hit home pretty quickly if you try to do this and SOAP is the only choice these days. Arguing against that is like trying to persuade the sea to stay put. SOAP (and Web Services) are for interoperability as well as for the Web, and there is so much backing for this from the industry that it's getting easier and easier to achieve. Interoperability isn't something that's only recently come on the scene, and trying to achieve it with technologies such as CORBA was always possible, but more difficult than it probably needed to be. However, being one of those distributed systems experts doesn't mean that I have to like this current state of affairs, and it's been the subject of many good discussions I've had with the likes of Savas and Jim over the years. But pragmatic realities can't be shaken off.
Fortunately I'm certain that Web Services and SOAP are going to have to improve, though how long it will take is something I'm not willing to take a stab at. A good analogy comes from natural languages. There may well be an argument that English isn't the best global language and that something like French or German would be a better choice. For example, I've heard arguments that English isn't structured enough and has many illogical rules that make teaching it as a foreign language difficult. There are people who want to change this, but that's another battle that can't be won. Due to historical events, English has become the language of commerce and a fairly global language too. However, what we speak today isn't the same as was spoken in the tenth century or even the eighteenth century: it has evolved to take into account the changing needs of society (e.g., slang). This is how I see SOAP (and Web Services) evolving too. OK, it would be great if we could all agree now to adopt something that is more efficient and has a better architecture, but that isn't going to happen. But I do believe things will evolve and improve. Let's hope it doesn't take a couple of centuries though.
There's a separate discussion about RPC versus messaging, but that is irrespective of the underlying distributed system technology. We've had the same discussions in CORBA (e.g., this), J2EE and other technologies going back as far as I can remember. This current discussion arose purely around the area of SOAP performance and I feel that it's starting to drift into other things.
In summary then: guys - you're not arguing about the same thing. I think that we can all agree that the performance of SOAP is poor compared to other platforms (and that there are a number of other problems with it). Pointing that out is fair enough and definitely shouldn't be ignored (how else does our industry progress than by peer review in one form or another?) If people were saying that SOAP should be static and not evolve and improve, then I think it would be incumbent on us all to point out the problems as much as we could. I don't think any of the discussions I've seen so far have implied that this is the case. But it isn't worth arguing that X has better performance than Y if X is meant for one problem domain and Y is meant for another; sometimes performance really is secondary for many customers.
I also don't think that this is a case of smart people defending bad ideas. I think it certainly would be if those smart people believed that the status quo was good enough. However, I didn't read that into Jim's original post.
Saturday, July 02, 2005
JavaOne WS-CAF face-to-face
We've just finished our WS-CAF face-to-face meeting. It was hosted by Oracle and arranged to coincide with JavaOne so we could maximise attendance. In the end we had a majority of the TC turn up in person and several dialed in to the conference call.
We made good progress on WS-Context and WS-CF, progressing the former to a second Public Review Draft; the latter is almost ready for its first Public Review Draft. Since the Dublin face-to-face meeting at the end of last year, we've spent most of our time working on WS-CF, and now only consistency of the WSDL and schema remain. With any luck that shouldn't take more than a few days and then, assuming the TC agrees, we can move this towards its Public Review Draft.
Public Review is one step away from standardisation, so I'm very pleased with the way things have gone so far. The power of any specification really comes when it's standardised and adopted by others, either in product or in other specifications/standards that are themselves ultimately adopted into product. We've seen a lot of interest in WS-Context over the past year or so and I think this is reflected in what the other members of the TC have seen as well.
The rest of the time in the meeting was spent going over the WS-ACID specification. This is a really important step towards completing the entire composite application framework and in particular for addressing the critical issue of reliability in Web Services business flows. Since WS-ACID is based on traditional transaction processing concepts such as two-phase commit and synchronizations there wasn't much to discuss in terms of the actual model. The likes of Sun, Oracle, IONA and ourselves all have sufficient experience in this area to know that what we currently have in the specification looks about right.
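Since WS-ACID builds on classic two-phase commit, the shape of the protocol is worth sketching. This is an illustrative sketch in plain Java, not the WS-ACID API (the interface and class names are mine): the coordinator gathers votes in phase one and only commits in phase two if every participant prepared.

```java
import java.util.List;

// Illustrative two-phase commit sketch; names are hypothetical, not WS-ACID's.
interface Participant {
    boolean prepare();  // vote: true = prepared, false = must abort
    void commit();
    void rollback();
}

class Coordinator {
    // Phase 1: collect votes. Phase 2: commit only if all voted to prepare;
    // otherwise tell everyone (including non-prepared participants, which
    // simply ignore it in this sketch) to roll back.
    static String complete(List<Participant> participants) {
        for (Participant p : participants) {
            if (!p.prepare()) {
                for (Participant q : participants) q.rollback();
                return "rolled back";
            }
        }
        for (Participant p : participants) p.commit();
        return "committed";
    }
}
```

The real protocol adds durable logging and recovery at both coordinator and participant, which is where most of the engineering effort goes.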
The issues that did arise included:
(i) if and how we might define some policies for services to identify themselves as being transaction aware. The lack of a standard in the area of policy makes this difficult, but Greg has written a document on this, so hopefully we can still make progress.
(ii) if and how we might define some isolation policies for participants/resource managers. The most often talked about isolation level for ACID transactions is serializability (essentially a concurrent execution of transactions is made to appear as though those transactions executed in some serial manner). However, other approaches do exist, including read-committed and read-uncommitted. Other specifications don't say anything about different isolation levels, so there was a discussion about whether or not it is needed here. We decided that it was a feature worth looking at supporting using policies, so we'll see where this leads us.
(iii) supporting transaction enlistment inflow: rather than an importing domain (one which received a transactional request) registering participants directly with a remote coordinator immediately, information about the participants (their EPRs) is sent back in the context associated with the response and the receiver does the registration. This is an optimisation, but it may save several upcalls, particularly if the coordinator is co-located with the requester.
(iv) shared and unshared transaction support for asynchronous transactions. In a shared transaction model, clients and servers share an end-to-end transaction: when a client invokes an operation on a service and does so in the scope of a transaction, that service may enlist participants in the same transaction the client started. The unshared model is used in store and forward transports (e.g., message queues). In this model, the communication between client and server is broken into separate requests, separated by (typically reliable) transmission between routers. In this model, multiple shared transactions are used where each is executed to completion before the next one begins. There was a lot of good discussion around this and we're going to see if there is something we can do in this space. It took us a long time to address the topic in the OTS so I'm not expecting a rapid resolution.
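On issue (ii), the isolation levels under discussion are exactly those that JDBC already exposes, so a resource manager honouring such a policy might simply map the policy value onto a JDBC constant before doing any work. The policy strings below are hypothetical (no such names exist in any specification); the JDBC constants are real:

```java
import java.sql.Connection;

// Hypothetical mapping from an isolation-policy value to the corresponding
// JDBC isolation level; the policy names here are invented for illustration.
final class IsolationPolicy {
    static int toJdbcLevel(String policy) {
        switch (policy) {
            case "serializable":     return Connection.TRANSACTION_SERIALIZABLE;
            case "read-committed":   return Connection.TRANSACTION_READ_COMMITTED;
            case "read-uncommitted": return Connection.TRANSACTION_READ_UNCOMMITTED;
            default: throw new IllegalArgumentException("unknown policy: " + policy);
        }
    }
}
```

A resource manager would then call `connection.setTransactionIsolation(level)` with the result before enlisting in the transaction.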
Overall we had a great meeting and the specifications look in a good state. In many ways it was very laid back, but I suppose that's California for you!
Wednesday, June 29, 2005
Video game character
I tried What Video Game Character Are You? and here's the result:
Not sure I'd agree with all of that - I can't remember the last time I got lost inside a building ;-)
Tuesday, June 28, 2005
Grid BOF at JavaOne
I'm here at JavaOne sitting on a panel for a BOF on the Grid: Building Tomorrow's Grid. Well, that was last night (9:30pm, but it was surprisingly well attended, with standing room only) and I think it went very well. There was a lot of audience participation and it definitely seemed like there was a lot of interest in grid (small 'g') computing. Overall the panel pretty much agreed with one another; the main exception was when it came to the subject of Java's role in the grid (small 'g' again): the panel split down the middle, with Richard Nicholson and Dan Hushon saying that Jini is the way forward, whilst Greg and I disagreed. As Greg pointed out, it's unrealistic to assume that the world will be purely Java, and the bridging/wrapping approach to embedding non-Java services into Jini seems like a hack if the real answer is to simply use the right tool (aka language) for the right job.
During our individual presentations at the start of the session, we were asked to answer the following questions and I think my answers were similar to those of the rest of the panel:
(i) What is grid computing? I made the distinction between grid (small 'g') and Grid/GRID computing. I think grid has been around for many years and people simply haven't collected all of these massively parallel and distributed applications under a single categorisation. Take a look at SETI@home for example. I remember installing this when it first came out back in the late 1990's. The statistics for it today are astonishing: 3 million computers, 14 Teraflops average, 500,000 years of processing power in 18 months - the equivalent of several supercomputers. It's got to be one of the most successful grids around. GRID, on the other hand, is IMO an effort to try to standardise on practices, patterns and infrastructure in building grids: a great idea when you consider the number of grid toolkits that are around.
(ii) What problem does it solve? Pretty simple - not many organisations can afford supercomputers, but there are a lot of massively parallel applications out there and many computers that simply aren't used most of the time. (One member of the audience came up to me afterwards and said that his company is thinking about building a grid to use the power of 14000 machines that they've got, which are idle 40% of the time.) Using someone else's resources to do your work seems like a good idea - it's cost effective for a start!
(iii) Where is it on the hype scale? grid (small 'g') has been around for many years and most certainly isn't hype. Compared to that, GRID is more hype than reality but that's just a timing thing.
Overall I think it was a great BOF and I was pleasantly surprised to see how many people turned out. Now it's off for a book signing and definitely some session tracks.
Friday, June 24, 2005
SOAP: slow or fast?
There's an interesting discussion going on here between Michi and Jim about the performance of SOAP. I wasn't going to get involved in what could easily become a "PC versus Mac, Unix versus Windows" debate, but I'll add my 2 cents worth.
I agree with them both, to a point.
I'm sure ICE is fast (I too haven't used it). I do know that CORBA implementations these days are very fast too, and message service implementations (for example) like AMS are built for speed. When I started out doing my PhD back in the mid 1980's, my first task was to help write and improve Rajdoot, one of the first RPC mechanisms around. Even then, when a fast network was 10 mbps (if you were lucky) and we used Whitechapels or Sun 360's, we could regularly get round trips of 5ms for messages up to 1K in size (packet fragmentation/re-assembly happens above this, so this was the maximum critical packet size). Not fast by today's standards, but fast back then. Having been working with SOAP and Web Services for 5 years now, I know SOAP is slow even compared to what we had in 1986, so it simply doesn't compare to what's possible these days. So I agree with Michi on that point (yes, we have tried compression over the years too, and got the same results as Michi - it works, but you've got to use it carefully).
However, and this is where I also agree with Jim, SOAP performance can be improved. The sorts of things that go on under the covers today in terms of XML parsing, for example, are pretty inefficient. Next time you want to see for yourself, just fire up something like OptimizeIt and watch what happens. I'm pretty confident that developers can and will improve on this. As an analogy, back when IONA released the first version of Orbix it was the market leader but its performance was terrible compared to later revisions. (Opcodes were shipped as strings, for a start!) I'm not singling out IONA - this is a pattern that many other ORB providers followed. So, I agree with Jim: SOAP doesn't have to be this slow - it can be improved.
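To make the parsing point concrete, here's the kind of small change that helps: reusing one parser rather than building a factory and a parser for every message. This is a sketch, not taken from any particular SOAP stack (the class name is mine):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Naive stacks create a DocumentBuilderFactory and DocumentBuilder per
// message; both are expensive to construct. A DocumentBuilder can instead be
// created once and reset() between parses (it is not thread-safe, hence the
// synchronisation here - a real stack would pool builders per thread).
final class EnvelopeParser {
    private static final DocumentBuilder BUILDER;
    static {
        try {
            BUILDER = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    static synchronized Document parse(String xml) {
        try {
            BUILDER.reset();  // clear any state so the builder can be reused
            return BUILDER.parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Pull parsers (StAX-style) go further still, since a DOM tree isn't needed just to route a message, but parser reuse alone removes a measurable chunk of per-message overhead.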
But this is where I stop agreeing and come back to the fact that it's beginning to sound like the "PC versus Mac, Unix versus Windows" debates of old. You're not comparing like with like.
This is definitely a case of using the right tool for the right job, combined with some unfortunate commercial realities. If you want interoperability with other vendors (eventually pretty much any other vendor on the planet), then you'd go the SOAP route: there is no logical argument to the contrary. CORBA didn't get mass adoption, DCE failed before it, and despite Microsoft's power, so did DCOM. Eric has some interesting things to say on the subject here, but the reason SOAP works well is because of XML, HTTP (IMO) and pretty much universal adoption. I can't see that changing. In the foreseeable future, I can't see the likes of Microsoft, IBM, Oracle, BEA etc. agreeing on a single protocol and infrastructure as they have with SOAP. To be honest, I think they were forced into the current situation because of the mass take-up of the original Web: they like vendor lock-in and had managed to maintain it for decades prior to Tim's arrival on the scene.
But you pay a heavy price for this kind of interoperability. There are inherent performance problems in SOAP that I just can't see going away. We may be able to chip at the surface and perhaps even make big dents, but fundamentally I'm confident that SOAP performance versus something like ICE (or CORBA) will always be a one-sided contest. However, a contest of interoperability will be just as one-sided, with SOAP winning. From the moment I got into Web Services, I've said that I can't see it (and SOAP) replacing distributed environments like CORBA everywhere. It frustrates me at times when I see clients trying to do just that, though, and then complaining that the results aren't fast enough! If I want to go off-road, I'll buy a Land Rover; but if I want speed, give me a Ferrari any day! Distributed systems such as CORBA have been heavily optimised over the years and use binary encodings as much as possible - with the resultant impact on interoperability and performance. But that is fine. That's what they're intended for. Certainly if I was interested in high performance, I wouldn't be looking at SOAP or Web Services, but at CORBA (or something similar).
So in summary: of course there will be performance improvements for the SOAP infrastructure. There may even be a slow evolution to a pure binary, extremely efficient distributed invocation mechanism that looks similar to those systems that have gone before. But it's not strictly necessary and I don't see it happening as a priority. Use SOAP for interoperability. It lowers the integration barrier. But if you are really interested in performance and/or can impose a single solution on your corporate infrastructure, you may be better off looking elsewhere, to something like CORBA, or maybe even ICE.
Savas on the move
It's official now: Savas is on the move to Microsoft and Don Box's team. Savas and I have talked about his moving from Paul's group for quite a while, so it's fair to say that it's not a surprise. I'm both happy for him and sad for myself - where do I find a coffee buddy now?
Savas and I have known each other for many years as friends and colleagues: while at HP, he was in my transactions team and was a star! I'm absolutely sure that Savas will make the most of this opportunity, and it's a good move for him and Microsoft. Though I still think the other job was just as good Savas ;-)
Anyway, good luck my friend and at the very least we'll be able to catch up at HPTS.
Wednesday, June 22, 2005
EPR rules of engagement
William, an ex-HP colleague, has some interesting things to say about EPRs and how people tackle them. I'm broadly in agreement with what he has to say; he even references our paper, which is nice. Unfortunately there are a couple of places where he's wrong:
(i) on the implication that WS-Coordination is the same as WS-Context. However, I think Greg has responded to that here (though since his blog seems down I can't check).
(ii) on the implication that this formal objection we've raised is somehow against EPRs and ReferenceParameters. It isn't (though that's not to say I don't believe the latter is inherently wrong, but that is a completely different issue). As I say here, this is about keeping EPRs symmetrical. To summarise, the current state of affairs is that EPRs are encapsulated entities until you need to use them, at which point their constituent elements (e.g., the endpoint URI, ReferenceParameters etc.) appear as first-class elements in the SOAP header. As the objection describes, this can lead to a number of problems and we believe it's worth fixing now rather than trying to retro-fit something later.
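To illustrate the asymmetry in plain Java (the class and header names below are mine, not the WS-Addressing schema): an EPR travels as one encapsulated value, but the moment it is used, its ReferenceParameters are promoted to ordinary SOAP headers, where they're indistinguishable from any other header.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of an EPR: encapsulated on the wire as a single value,
// but flattened into first-class headers when a message is sent to it.
final class Epr {
    final String address;
    final Map<String, String> referenceParameters = new LinkedHashMap<>();

    Epr(String address) { this.address = address; }

    // What today's rules require of the sending stack: each reference
    // parameter becomes a top-level header alongside wsa:To, losing the
    // encapsulation the EPR had up to this point.
    List<String> flattenIntoHeaders() {
        List<String> headers = new ArrayList<>();
        headers.add("wsa:To=" + address);
        referenceParameters.forEach((k, v) -> headers.add(k + "=" + v));
        return headers;
    }
}
```

Keeping the EPR symmetrical would mean it stays a single structured value in both directions, rather than existing in two different shapes depending on whether it is being passed around or being used.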
Hopefully given the pitfalls that William's original blog entry describes, he'll agree that this change we're after only makes things better.
Sunday, June 19, 2005
HPTS 2005
I submitted 3 papers to this year's High Performance Transaction Systems workshop and have been invited to attend. The biennial HPTS workshop is probably my favourite workshop/conference, so I'm pleased to be accepted for the fifth successive time. The papers aren't up on the web site yet, but in summary:
(i) a paper on WS-CAF that I co-authored with Eric and Greg.
(ii) a paper on ArjunaCore that I co-authored with Santosh.
(iii) a paper on the different conceptual models of Web Services transactions: does the one-size-fits-all approach really work, and if not, why not?
Once the papers are on some web site, I'll update the blog.
Tuesday, June 14, 2005
A musical interlude
At last year's WS-CAF f2f in New Orleans, Eric started to tell me about Professor Longhair and even gave me a few tracks to listen to. Wonderful music and it brings back good memories of New Orleans.
Thursday, June 02, 2005
End of the Grid TX road?
There's been an effort going on at the GGF for a while now into transaction management for Grid applications. The initial idea was to clearly define the requirements space for transactions in Grid (are they different from Web Services, for example), then to take an objective look at efforts that have been going on elsewhere (e.g., Web Services) and finally to say whether or not those efforts are suitable. Ultimately, if the existing work wasn't deemed suitable for Grid transactions, the group would define one or more new transaction protocols; I think this last bit was always potentially work for a successor group, rather than for this one.
Well, we're finally coming to the end of the road on this phase of the work. It looks like in the next few weeks we'll have a good report on everything up to, and including, recommendations for any future effort in this space by the GGF. It doesn't look like we'll be defining any new protocol(s) after all, but if the GGF decide that some are needed after reading the report, hopefully it will be useful input to the next phase.
Chapter in MIT book on SOC
About 2 years ago I wrote a paper for a special issue of the CACM on Web Services and transactions. A little later, I was asked by the guest editors to write an extended version for an MIT Press book on service-oriented computing. It's taken a little longer than the editors or I expected, but finally it looks like the book is coming to fruition. I got some good feedback on the chapter (which tried to cover all of the efforts in this space in an objective manner) a few weeks ago and now need to finalise the work by the end of the month.
One thing this did bring to mind is precisely how long I've been working in this area (and transactions in general). Not sure if that's a good thing or a bad thing.
Sunday, May 29, 2005
XTECH 2005
I did a day trip to XTECH 2005, where I was presenting on WS-CAF. A very long, but interesting day.
I wasn't presenting until later in the day, so Edd asked me to chair one of the earlier sessions of the day. The presentations I chaired were "Adoption of UBL in Denmark - business cases and experiences" and "Implementing Web Services for Healthcare - Lessons & Pitfalls", which were both very interesting for similar reasons. Firstly, they both talked about the use of ebXML in the public sector (ebXML is a big effort, so if you want to know which aspects of it they're using, I'd encourage you to read the papers), and secondly, the key to their success was pretty much the same: legislation! In one case, any company wanting to send invoices to the government has to do it electronically, and companies were given 8 weeks to implement the necessary support within their organisation. I forget the actual figures at the moment, but the government did a "time and motion" study of invoicing and reckoned they could save $100 million annually by doing this. And it all seems to be working!
They were both extremely interesting presentations and the audience asked some good questions that helped to pull more information from the presenters. Overall, I think this time I probably found the chairing experience more interesting than presenting!
Monday, May 16, 2005
WWW2006 update
We're in the process of finalising the PC for the XML & Web Services track of WWW2006. It's shaping up to be a good PC and I hope the quality of papers we get is good too. So, get your thinking hats on and start to write. Although the CFP hasn't gone out yet, and we'll be accepting papers on a wide range of XML & Web Service related topics, this year I'd like to see some papers on fault tolerance and reliability. That topic tends to get overlooked a lot in the early days of any distributed architecture "wave", but I think we're reaching the crest of this wave and things need to change.
Friday, May 13, 2005
Asymmetrical WS-Addressing
I have to admit that I like symmetry for a number of reasons. Nature prefers symmetry in a wide range of areas and it shows up all around us. So, symmetry is good. I know it's not a hard-and-fast rule, but we've followed nature before and I reckon it still has a lot to teach us. And more pragmatically, symmetry just makes sense. To paraphrase Occam's Razor: the simplest solution is probably the right one. So, why make things more complex than they need to be?
So how does this relate to WS-Addressing? Well ever since I started to use it with the Web Services Coordination and Transaction specifications, I've been annoyed by the fact that not everything is a WS-Addr End Point Reference (EPR). Basically in WS-Addr, an EPR is a delivery address for your message and contains the URI of the service as well as other information needed to ultimately deliver that message.
When sending a message, you can define wsa:ReplyTo, wsa:FaultTo and wsa:From all as EPRs. But the actual destination address isn't an EPR! Or more accurately, it's a broken-apart EPR: as a receiver, you may have got it in a wsa:ReplyTo, but if you then want to use it to send a response, you have to do some work with it first.
For example, here's a valid wsa header for an input message:
<S:Header>
<wsa:MessageID>http://example.com/someuniquestring
</wsa:MessageID>
<wsa:ReplyTo>
<wsa:Address>http://example.com/business/client1
</wsa:Address>
<wsa:ReferenceParameters>
<foo:dest objid="urn:1234:5678"/>
</wsa:ReferenceParameters>
</wsa:ReplyTo>
<wsa:To S:mustUnderstand="1">
mailto:fabrikam@example.com</wsa:To>
<wsa:Action>http://example.com/fabrikam/mail/Delete
</wsa:Action>
</S:Header>
Now in a symmetrical world, I should simply be able to take the wsa:ReplyTo EPR and put it (renamed as wsa:To of course) into the SOAP header of the response message like so:
<S:Header>
<wsa:To>
<wsa:Address>http://example.com/business/client1
</wsa:Address>
<wsa:ReferenceParameters>
<foo:dest objid="urn:1234:5678"/>
</wsa:ReferenceParameters>
</wsa:To>
</S:Header>
But no, I can't do that. I have to take everything out of the wsa:ReplyTo and essentially promote the contents to the root, like so:
<S:Header>
<wsa:To>
<wsa:Address>http://example.com/business/client1
</wsa:Address>
</wsa:To>
<foo:dest objid="urn:1234:5678"/>
</S:Header>
And as you can see, it's not actually as simple as that - I've got to break apart the ReferenceParameters (which could be arbitrary in size). Why can't I just use the EPR I received as an EPR? It can't be because of deficiencies in the SOAP processing model and I don't see how it adds anything to the architecture/model (if anything it detracts from it).
In short: It does not make sense!
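The asymmetry is easy to see in code. Here's a minimal sketch (hypothetical helper names and dict-based message representation, not any real WS-Addressing library) of what a receiver has to do to turn a received wsa:ReplyTo EPR into the headers of the reply: the EPR's Address becomes a flat wsa:To, and every ReferenceParameter has to be hoisted out of the EPR and inserted as its own top-level header block.

```python
# Sketch: "promoting" a received wsa:ReplyTo EPR into reply headers,
# as the WS-Addressing rules require. Names and structure are assumed
# for illustration only.

def reply_headers(reply_to_epr):
    """reply_to_epr: dict with 'Address' and optional 'ReferenceParameters'."""
    headers = []
    # The EPR's Address becomes the flat wsa:To header...
    headers.append(("wsa:To", reply_to_epr["Address"]))
    # ...and each reference parameter is broken out of the EPR and
    # promoted to a sibling top-level SOAP header.
    for name, value in reply_to_epr.get("ReferenceParameters", []):
        headers.append((name, value))
    return headers

epr = {
    "Address": "http://example.com/business/client1",
    "ReferenceParameters": [("foo:dest", "urn:1234:5678")],
}
print(reply_headers(epr))
```

In a symmetrical world, none of this bookkeeping would exist: the reply header would just be the EPR you were handed, renamed.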
Saturday, May 07, 2005
java.net blog
I've had a blog on java.net for a while, but have found very little time to put something together for a first post. A conversation I had with Greg the other day around the importance of WS-RX and WS-Context got me thinking and I thought I'd write something about WS-Context for the readers of java.net.
Thursday, May 05, 2005
Some more history
Just a couple of links to some things of historical interest to other ex-HP/Bluestoners out there.
Sunday, May 01, 2005
Why transactions are often under used
Back here I mentioned that on many occasions I've come across companies and products that could/should be using transactions when they're not. Obviously the inverse is also the case: the use of transactions when they're simply not needed can leave people with a bad experience and hence averse to looking to transactions in the future, when perhaps they should. So, the reasons for people not using or considering transactions when they are in fact useful can be broken down into two broad categories:
(i) overhead: it's true that you can never get something for nothing. But as we showed here with regard to principles, and I mentioned here with regard to our specific implementation, it doesn't have to be the case that you have to pay dearly (in terms of overhead and cash) to get the guarantees transactions offer. There's always a trade-off to be considered in pretty much any additional functionality you may want to add to an application, whether it's transactions for reliability/consistency, replication for availability/performance, security etc. Rather than think about what negative impact this functionality is going to have on your system, you need to consider what the impact will be on not having it. Now with the likes of security and replication it's fairly evident what benefits they bring most of the time. With transactions it's always a little more difficult, because most of the time you don't see the benefit: it's typically only when a failure happens. Now it is true that failures are not common, but they do happen - network failures, disk failures, memory failures etc. But in that case you should think of transactions like an insurance policy: how often do you claim on your home, car or travel insurance? Probably not that often. But most people will still renew their policies year after year, even when some (e.g., holiday insurance) aren't mandatory. Why? Because you don't want to take the risk and in simple terms the cost of insurance is significantly less than the cost of whatever it is that you're insuring. I'd like to see a time when transactions are seen to be so cheap that they're just part of the infrastructure that people take for granted. I think with something like ArjunaCore and the transaction implementation in Indigo that's starting to look possible.
(ii) education: in some ways this follows on from the above discussion, where people (architects, engineers etc.) either have a bad previous experience of transactions or have heard rumours. However, there are at least a couple of other sides to this: first, people (biz-dev as well as engineers) simply not knowing that transactions exist (computer science is a large field, so it's not always the case that everyone knows about every one of its facets). More worrying though is the second aspect, which I've seen a few times - vendors of products that really could (and should) benefit from transactions not using them because they either don't understand the reasons why they need them (education again) or they don't want the perceived overhead in their products and are willing to take the risk (on behalf of all of their customers) that failures won't ever happen; and they may do it "silently". It's this last point that always annoys me and not simply because I've been working with transactions for a long time: if you go back to the insurance analogy, it's the equivalent of you buying insurance but the insurance company simply pockets the money and if/when you come to make a claim they feign ignorance, leaving you up the proverbial creek without a paddle. By the time you know there's a problem, it's already too late!
I think that there's a lot that we as a transaction industry can do to help this, through education, evangelising and improved product functionality/performance. But I also think there's a lot that the user can do, for instance by considering what the effects of failures might be in certain scenarios and, particularly in the case of 3rd party products/components where you might have been told failures "are handled", drill down into the "hows" and "whys" rather than just taking it for granted. You don't buy insurance from just anyone; you use trusted parties, such as banks, where trust is built up and the products assured and backed by the law. It's a shame we don't have something equivalent for software.
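To make the failure argument concrete, here's an illustrative sketch (an in-memory toy "bank", not ArjunaCore or any real transaction API) of why a multi-step update needs transactional protection: if a failure strikes between the debit and the credit, money simply vanishes unless both steps commit or roll back together.

```python
# Toy example: atomicity via a snapshot "log". Purely illustrative;
# real transaction systems use durable logs and two-phase commit.

class CrashBetweenSteps(Exception):
    pass

def transfer(accounts, src, dst, amount, crash=False):
    snapshot = dict(accounts)          # the "transaction log"
    try:
        accounts[src] -= amount        # step 1: debit
        if crash:
            raise CrashBetweenSteps()  # simulated failure mid-transfer
        accounts[dst] += amount        # step 2: credit
    except CrashBetweenSteps:
        accounts.clear()               # rollback: restore the snapshot
        accounts.update(snapshot)

accounts = {"A": 100, "B": 0}
transfer(accounts, "A", "B", 40, crash=True)
assert accounts == {"A": 100, "B": 0}   # failure: nothing half-done
transfer(accounts, "A", "B", 40)
assert accounts == {"A": 60, "B": 40}   # success: both steps applied
```

Without the rollback, the crashing run would leave A debited and B never credited, which is exactly the "silent" risk those vendors are taking on behalf of their customers.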
A belated review for Eric's latest book
Web Services specification architecture book
Some of my friends/colleagues from IBM have finally gotten their book out. This gives a view of the Web Services architecture as IBM sees it, along with the specifications they see as fitting into it. Since most (all?) of the specifications are ones that IBM has been working on (in and predominantly out of standards bodies), it may not be an objective view, but no one can argue that it's not important to the industry.
Now all I need do is write a review.
Saturday, April 30, 2005
WWW 2006
I'm the chair of the XML and Web Services track of WWW2006, which next year is being held in sunny Edinburgh. Santosh is my deputy (kind of makes a change after all these years) and we're in the process of drawing up the list of PC members. I'm not sure when the CFP will go out, but I suspect it'll be around August time, with reviews happening over December as they did for this year's conference. So, get those thinking hats on!
Thursday, April 28, 2005
Grid and Web Services work
The session concept for Web Services
Greg has posted about a paper we wrote at the end of last year. I won't repeat the whole story here, but I really hope the delays in publication are sorted out. Anyway, take a look at the paper. Greg is hopefully going to put up a pdf version shortly.
The utility of standards
Savas has definitely hit the nail on the head with regard to standards and what it means to talk about "compliance", "conformance" and interoperability. I've been working in the area of standards for more years than I care to remember and some of the standards I've come across have often been let down by their conformance statements, with a knock-on effect on interoperability. It is always difficult to get large companies with vested interests in their own products to agree to a standard that either obsoletes their product(s) or means they have to modify them; it does make sense to protect your investments. However, the benefits of standards only come when you pay more than lip-service to them and the "conforms to" label shouldn't be used as a badge of honour unless it really means something. Unfortunately there are many vendors, products and organisations that rely on the fact that their customers/users don't have the time or skills to see beyond the label and understand what it really means to them. I've often heard the complaint that customers get locked into a standard, when in fact what they really mean is that they've been locked into one vendor's interpretation of that standard.
WS-CAF in New Orleans
The WS-CAF committee has been meeting in New Orleans this week. Unfortunately due to previous commitments I couldn't make it in person, which meant I had to take part via teleconference. Not exactly the easiest thing to do: phones were definitely not meant for day-long meetings; roll on the invention of some form of holographic avatar.
Anyway, despite the fact that several of us had to endure the telephone line frequency response of 300 Hz to 3400 Hz for two days, we made excellent progress. We've pretty much closed up all of the issues related to WS-Context (thanks to Kevin Conner for working on the schema and WSDL). With a little luck we'll be able to move that to another committee draft within the month. (Apparently the change in OASIS rules recently has had a knock-on effect on names such as "committee draft" and "committee standard" which I've yet to fully understand, but I know what I mean even if OASIS staff might argue.) I've said before that with hindsight, WS-Context is probably the most important and potentially influential specification of the entire WS-CAF stack. It was never intended to be, but as is often the case, the simplest things that get overlooked initially turn out to be the most important.
I'm not saying that coordination or transactions aren't important, but the applicability of context to Web Services (and any distributed architecture) is so much wider. As more Web Services specifications come along, it's easy to see how they are using a form of context without necessarily being explicit about it. Hopefully this is just an education thing and once WS-Context becomes a standard and we are able to evangelise about it more and more, there'll come a time when WS-Context is used as naturally as SOAP and WSDL. One of my next posts will be about a paper I helped write with Greg on this subject.
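The core idea is simple enough to sketch. Here's a minimal illustration (assumed header names and dict-based messages, not the actual WS-Context schema) of implicit context propagation: an activity identifier travels in a header with every message, so a group of services can correlate their work without the application payload knowing anything about it.

```python
# Sketch of context propagation in the WS-Context style. All names
# (wsctx:Context, context-id) are assumed for illustration.

import uuid

def begin_activity():
    # In WS-Context terms a context service would mint this; here it's
    # just a fresh, globally unique context identifier.
    return {"context-id": str(uuid.uuid4())}

def send(body, context):
    # The context rides in the header, separate from the application body.
    return {"header": {"wsctx:Context": context}, "body": body}

def service(message):
    # Any service in the activity reads the context and must propagate
    # it on any messages it sends in turn.
    ctx = message["header"]["wsctx:Context"]
    return send({"result": "ok"}, ctx)

ctx = begin_activity()
reply = service(send({"op": "doWork"}, ctx))
assert reply["header"]["wsctx:Context"] == ctx  # same activity throughout
```

This is the pattern that so many Web Services specifications reinvent implicitly; WS-Context just makes it explicit and reusable.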
I'm also contributing to the OASIS SOA-RM technical committee on this and other aspects of WS-CAF, so that's another way of expanding the influence of context as a principle and WS-Context as a concrete example.
As far as the New Orleans work went though, we also made excellent progress with WS-CF (the coordination framework). I'm confident we're close to being able to finalise this and put it up for public review. The model is a little more focused now than it was when we first submitted the specifications. It's down to pure registration and groupings, with coordination as something that can be layered on top. As with WS-Context, I think this helps to increase its applicability and I suspect that Greg and I may be writing a paper on this eventually.
So that left transactions. I gave a presentation overview (again) of the models and we agreed that the first thing to do was separate our 3 models into individual specifications. This has a lot of merit, not least of which is the fact that it's easier to manage and easier for people to decide what aspects of transaction management they want to implement and still be able to say in a straightforward manner what they conform to. We've also decided that the first model we need to concentrate on is WS-ACID. This is primarily intended to enable interoperability of existing transaction processing systems, and I'm really looking forward to a future interoperability workshop where we can demonstrate transaction interoperability Arjuna-to-Oracle-to-IONA (for example).
Friday, April 22, 2005
Monday, April 18, 2005
JavaOne 2005
Saturday, April 16, 2005
New Orleans and diving
I was hoping to go to the next face-to-face meeting of WS-CAF in New Orleans. As last year, we've made it coincident with the annual OASIS Symposium in New Orleans. Last year was my first time in the city and I loved it: great food, great music and a great atmosphere. Myself, Eric, Greg and other friends and colleagues managed to find time from the slog of standards work (none of that sniggering at the back please!) to check out some of the Jazz festival last year, so I was looking forward to going back. That and hopefully producing another Committee Draft of WS-Context, maybe the first Committee Draft of WS-CF and trying to start on WS-TXM (a lot for a 2 day meeting, I know).
Unfortunately I won't be able to go :-( For Christmas my wife enrolled me on a PADI open water diving course. I've been talking about learning to SCUBA dive for many years, going back at least to when we spent a week in the Caribbean as guests of Bluestone for being a Bluestone President's Club Award Winner. Several friends have also qualified over the intervening years, including Jim who kept pushing me to go do it. However, as with most things that aren't work related, I always managed to find a reason to put it off. Until this year that is. I have to say that since starting the course I've had a lot of fun and wish I'd done it earlier. Unfortunately, there are exams connected with the course, both written and practical (in the ocean - we've been doing everything in the local swimming pool so far), and some of them are coincident with the New Orleans face-to-face. I tried to postpone them, but it ain't going to happen, so I'm not going to be at the meeting in person. Which leaves me looking forward to 2 days of attending the meeting via the phone!
Web Services seminar
It's been a busy few weeks for us and our friends at CodeWorks. We're putting on a Web Services seminar for local businesses to help educate them on the benefits of Web Services. Fortunately we've been able to attract some good speakers for the event, including my old friend Werner Vogels from Amazon. Now that he's the CTO of Amazon he's finding it hard to spend time with us lesser mortals ;-) but I'm definitely going to make sure he has the opportunity to sample a few good pubs while he's here!