Monday, December 29, 2008

Eventual Consistency and XTP

I meant to post something about this article a while ago, but only found the time thanks to the Christmas vacation.

It's a nice article and what Werner's talking about is part of what I consider real extreme transaction processing (not the hype term). As I've said elsewhere several times before, a few of us in the transaction community have been working on this type of problem dating back to at least the work we did in early SOA/Web Services standards. Even longer if you consider the work on extended transaction models. It's nice to see practical implementations of some of the theory behind these models actually being used outside of academia.

I'm hoping that in the next year we'll see some fruit coming from research and development we've been doing on real XTP, including the REST approach I've mentioned before as well as general approaches to large scale (extended) transactions. This should further illustrate what's possible with relaxation of the ACID properties we know and love, particularly when combined with high performance messaging and replication techniques. Sometimes it's hard to believe that this has been going on for so long.

Since this article started with a reference to Werner, it's worth ending with him too: congratulations my friend on your latest award. Well done! I think I could have predicted it ;-)

Tuesday, December 09, 2008

Sad day

I just read that Oliver Postgate has died. It's sad because I grew up in an age when Bagpuss and the Clangers were the high points of television. Those were some of my fondest memories.

Saturday, December 06, 2008

SOA Symposium follow up

I seem to have forgotten to mention that I presented a couple of times at the first-ever SOA Symposium a couple of months ago.

Friday, November 28, 2008

The Eternal Golden Braid

I first read An Eternal Golden Braid about 14 years ago and thought it was a great book. I still do. If you haven't read it then I would encourage you to do so: it is a wonderful journey through mathematics, computer science, philosophy, art and so much more. (I notice that there's a 20th anniversary edition, so that may be going on my present list to Santa since my original copy is a bit dog-eared.) So it was with some surprise that I noticed a translation of Gödel's original work available! However, the reviews indicate that it may not be a good translation. But it may be worth a try anyway!

Sunday, November 16, 2008

EDA followup

JJ commented on my earlier posting. Unfortunately he doesn't seem to have comments enabled, or I'd have responded there. (JJ, if you do enable comments on that entry, let me know and I'll cross-link.)

Anyway, he talks about "the main failure of distributed computing is precisely the prevalent monotheism of traditional distributed computing approaches". Ignoring the fact that I don't believe distributed computing has failed, I think JJ read more into my posting than was intended. I didn't mean to suggest that any approach to distributed computing (e.g., RPC or message based) should be used to the exclusion of all others. I think I made that clear elsewhere too: you should use the best approach to hand. After all, isn't that what being a good engineer or architect is all about? I'm very fond of saying that one-size doesn't fit all, and by that I mean what I've just said. Of course systems exist that are entirely based on the concept of resources, services, events, RPC or message exchange, as examples. But there are examples of implementations that combine these different approaches very successfully too.

So in a way I disagree when JJ says "It is not being about fence sitting, or applying the right solution to a given problem." At this stage in computer science that is precisely what it is all about. However, I know what he's after and I've said elsewhere that in many ways computer science is a soft science and more of an art at the moment.

So looking forward, would I like to see a formal approach that combines all of the different approaches I've discussed to distributed computing, and which JJ mentions? Yes, I would. But would everyone use it in its entirety? No, of course they wouldn't: they would use the right aspect of it for the task at hand. Does that invalidate it? No. Does it offer an improvement over today? Yes. Is it going to happen soon? I doubt it, and I'm fairly sure JJ knows the reasons for that too (and the majority of them aren't technical). In fact I could probably write an entire article on that very subject because it relates to how close to a hard science computing science is and can come, with analogies between Galileo/religion and vendor values. But it's way too dark a subject to cover today.

Tuesday, November 11, 2008

DOA 2008 is over

I just finished chairing the last session for DOA 2008. It's been a good conference with some excellent presentations and invited speakers. Now for the long trip home.

Sunday, November 09, 2008

Thoughts on EDA and CEP

Over the past year or so I've been seeing more and more discussions on Event Driven Architecture (EDA). I suppose the thing that really made me take notice was when Gartner tried to push SOA 2.0 and mentioned EDA in there. For me, event driven architectures (notice the case) have been around for as long as I can recall. I've used the concept of events and triggers in many systems I've been involved with without thinking it was somehow novel. The reason for this? Well as I say every time I'm asked about event driven architectures (still notice the case), I believe that they are all around us and have been since the very first computer system: they are a useful abstraction for building certain types of systems (not necessarily just distributed systems).

It's possible to model everything in terms of events when you think about it. For example, in a distributed object system that is based on RPC and uses client and server stubs to make the distribution opaque, the ultimate server object (the one that does the real work) is essentially registered with the server stub to be called (triggered) by it upon the receipt of the right message (i.e., the right opcode in the recipient message). Likewise, the client application (or thread of control) can be considered to register itself with the client stub to be triggered (woken, since we're working in a synchronous environment in this example) when the response message comes back. Pub/sub based messaging systems, such as JMS, support the concept of events as well. When you subscribe to a blog you're essentially registering to be triggered when an event (a new blog entry) becomes available. Anyone remember the CORBA Event Service or the DCE Event Management Service?
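The register/trigger pattern running through all of these examples (server stubs, client stubs, pub/sub, blog subscriptions) can be sketched in a few lines. This is a hypothetical, minimal event bus; the names are mine, not from JMS, CORBA or any other system:

```python
# Minimal sketch of the register/trigger pattern: components subscribe
# to named events and are triggered when an event is delivered.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to be triggered when `topic` fires."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        """Deliver an event to every callback registered for `topic`."""
        for callback in self._subscribers[topic]:
            callback(payload)

# The "server stub" registers the real worker; the client registers to
# be woken when the response arrives -- both are just subscriptions.
bus = EventBus()
received = []
bus.subscribe("new-blog-entry", received.append)
bus.publish("new-blog-entry", "Thoughts on EDA and CEP")
print(received)  # ['Thoughts on EDA and CEP']
```

The same shape covers the synchronous RPC case too: "waking a blocked client" is just a subscription whose callback releases the waiting thread.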

Back when I was starting my PhD one of the first things I did (for fun, as it really had nothing to do with any of the various subjects I was considering for my PhD at the time) was to write a hardware bus simulator in C++. It had the concept of components being connected to the various buses and registering to be triggered when specific events occurred. In the real world, hardware is event based, with IRQs as the events and the bus as the delivery mechanism. A very good example of an asynchronous system that is often ignored by the software community these days. Maybe we should be looking at calling the next wave Interrupt Driven Architectures instead?

I've written applications on a number of windowing systems over the past 20 years. Probably the best one was InterViews, but they all share a very common pattern: components (e.g., windows, scroll bars, minimisers, etc.) register for events (e.g., left mouse click, mouse move, etc.).

Over the past 5 years or so we've also seen a move away from the synchronous request/response approach to one-way messages with various Web Services standards and SOAP implementations. Services register their interest in specific events (received messages) and clients register their interest in specific events (response messages). It's still very similar to the stub approach I mentioned earlier, but this approach makes the concept of events and triggers more explicit than it was before (at least in the Web Services world). This is definitely a good thing for loose coupling when you look at asynchronous environments. But is it a good thing for developers and users? Maybe.

So why have we not seen more event based frameworks pushing events as first class entities up to the developer, which seems to be what the current discussions around EDA are about? I think the reason for this is abstractions: we typically try to hide things like events because most of the time we don't think in terms of them. When a sound wave hits my ear drum and is converted into electrical impulses that trigger a response in me, I don't normally think of the sound wave or the electrical impulses as events. When I receive a letter through my post box telling me I've won the lottery (I wish!) I don't think about the steps it took to be delivered as a series of events (though they clearly are). If you look back at what I said earlier concerning how event driven architectures have been with us for many years, you'll see that what's really happened is that we've tended to hide events with abstractions like stubs and message exchanges.

I've said before that I think an event driven approach to SOA (or pretty much anything) is good if it helps you to better reason about the system in question. If it gets in the way, though, there are alternatives that could be better. As with anything, you use the right abstraction for the job. Is this going to be like RPC, where we realise that hiding distribution is a bad thing (we knew all along really)? Is hiding events bad? Well as I just said, that will depend upon what you're doing. But just because you start thinking in terms of events doesn't solve all of your problems immediately. There may be benefit to exposing some business events to the higher level, but we shouldn't think that rewriting everything we currently use in an event-explicit manner will be a global panacea - it may even be a step backwards. As an industry we need to stop pushing point solutions as global solutions and remember that one-size doesn't fit all.

And what about Complex Event Processing (CEP)? In a nutshell CEP gives you the ability to correlate events from different streams over time (time should really be modeled as an event too) to determine patterns, and to manage and predict other events (such as failure conditions). The data streams can contain messages at various levels of a system architecture, so it's possible to use CEP at the low level (e.g., allowing a system administrator to predict when machines are more likely to fail given daily usage patterns) as well as the higher level (e.g., allowing a business analyst to determine the buying habits of specific individuals given the market trends elsewhere in the world).
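A minimal sketch of that kind of temporal correlation, assuming an invented event format and an arbitrary rule ("three failures from one machine inside a sixty-second window"), might look like:

```python
# Toy CEP-style pattern detector: correlate events from a stream over a
# sliding time window. Event shape and threshold are invented for
# illustration, not taken from any real CEP engine.
from collections import defaultdict, deque

def detect_failure_bursts(stream, window=60.0, threshold=3):
    """Yield (timestamp, machine) whenever `threshold` failures from the
    same machine fall inside a sliding time window of `window` seconds."""
    recent = defaultdict(deque)  # machine -> timestamps of recent failures
    for ts, machine, kind in stream:
        if kind != "failure":
            continue
        q = recent[machine]
        q.append(ts)
        while q and ts - q[0] > window:  # evict events outside the window
            q.popleft()
        if len(q) >= threshold:
            yield ts, machine

events = [(0, "db1", "failure"), (10, "db1", "ok"),
          (20, "db1", "failure"), (45, "db1", "failure"),
          (200, "db1", "failure")]
print(list(detect_failure_bursts(events)))  # [(45, 'db1')]
```

Real CEP engines generalise this enormously (multiple streams, declarative pattern languages, out-of-order arrival), but the core idea is the same: state per correlation key, pruned by time.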

So if you've got CEP do you have an EDA? Well you obviously have something that is generating the event (data) streams that are input to the CEP implementation, but that doesn't mean the entire system is (or should be) architected using an event abstraction.

If you've got an EDA do you need CEP? This is more clear-cut: of course you don't need CEP if you've got events. There are more examples of event driven architectures that don't use (and don't need to use) CEP than those that do. But in some cases, it can make things easier to use CEP.

DOA 2008

I wasn't going to be able to come to DOA 2008, which was a shame because I enjoyed DOA 2007. However, neither of my co-chairs could attend either, so it fell to me to come. It's in Monterrey, Mexico this year and the trip here wasn't something I would like to repeat very often. But I'm here now and looking forward to the conference. It'll also give me a chance to catch up on some work that I haven't been able to do (because of work) and blog entries that have been filling up my to-do queue. So one way or another the next few days should be interesting.

Wednesday, November 05, 2008

MAX sounds like it'll be fun

I was originally booked to present at Adobe MAX as well as QCon SF (since both are back-to-back it was very convenient). However, I had to pull out. It's a shame because both events sound like they'll be a lot of fun and I wish I could attend.

Friday, October 17, 2008

New Macs

I've been a Mac user for a few years (keep promising myself that I'll return to Linux one day), so I was looking forward to the announcement of new Macs. But I obviously missed something because I'm not really impressed with what I've seen. No button on the track-pad? No 1920x1200 resolution? Slightly improved performance. And gone is the metal screen surround, to be replaced by black plastic?! I'm not sure what I expected from Apple, but it wasn't this. Still, maybe the thing will look better when viewed up close rather than through a web page. If the new design does get around the erosion problem then it'll be worth an upgrade eventually.

Thursday, October 16, 2008

Transactive Memory

I've been doing some research on Software Transactional Memory recently and as a result started to read about Transactive Memory. It quickly became apparent that the two things are unrelated, but I found transactive memory to be very interesting (not to suggest that STM isn't interesting!) Put simply, transactive memory is the name for the process whereby people selectively remember things and rely on their relationships with other people, or other things, to remember everything else. So for example, you don't remember every single phone number for everyone you know: you're selective (because that's how the brain works) and remember the "top" 6 or 7, but will probably rely on a telephone book or a PDA for the others. In the home, you may not remember how all of your friends are related to one another, maybe relying on your partner to do that. If you are a technophobe then you may rely on someone in your family to program the DVD player so you don't have to remember how to do it. If you've got kids then you may rely on your partner to handle the bulk of their welfare. And the list goes on.

So what has this to do with STM? Well, as I said at the start, absolutely nothing. But it does have relevance to something else I've been interested in for the past few years: repositories. I've always believed that systems such as repositories are better implemented in a federated manner: they scale much better. This means that although you may have a logical notion of a repository, each physical instance is responsible for some different aspect(s) of the whole "mind". This is important because, for example, how you store and index service binaries is different to how you would store and index service contracts, or workflow definitions etc. In this kind of setup, if you ask one repository instance "do you know about X" it should say either "no, but I know a man who does" or "no, but hold on and I'll get the answer for you". I'd never been able to put a name to the approach (beyond federation), but it seems to me that this is precisely transactive memory in a software system. DNS works in the same way.
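As a rough illustration (the node and artefact names are all invented), a federated lookup with referral might be sketched as:

```python
# Sketch of a federated repository: each node owns one aspect of the
# whole, and a query is either answered locally or referred to a peer
# ("no, but I know a man who does"). All names are illustrative.

class RepositoryNode:
    def __init__(self, name, entries):
        self.name = name
        self._entries = dict(entries)  # key -> stored artefact
        self._peers = []

    def link(self, peer):
        self._peers.append(peer)

    def lookup(self, key, _visited=None):
        visited = _visited if _visited is not None else set()
        visited.add(self.name)          # avoid referral loops
        if key in self._entries:        # "yes, I know about X"
            return self._entries[key]
        for peer in self._peers:        # "no, but I know a man who does"
            if peer.name not in visited:
                found = peer.lookup(key, visited)
                if found is not None:
                    return found
        return None

# Two physical instances, one logical repository:
contracts = RepositoryNode("contracts", {"OrderService.wsdl": "<wsdl/>"})
binaries = RepositoryNode("binaries", {"order-service.jar": b"..."})
contracts.link(binaries)
binaries.link(contracts)
print(contracts.lookup("order-service.jar"))  # found via referral
```

A production federation would return referrals to the caller (as DNS does) rather than chaining the query server-side, but the "selective memory plus pointers to who knows" shape is the same.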

Wednesday, October 08, 2008

CSP book available for free

I'll never know how I missed this announcement from 4 years ago, but the original CSP book is one of the most dog-eared in my library. If you don't have a copy, then I encourage you to download this and read it: it's as applicable today as it was 20 years ago (if not more so!)

To Twitter or not to Twitter?

A number of friends are "on" Twitter and apparently I should be too. I'm not quite sure why though. If blogging is anything to go by, it'll probably take me a couple of years to get round to it.

Nice article on REST

I haven't blogged much recently because I really haven't had the time. The same is true at the moment, since I'm sitting in yet another airport lounge, this time coming back from the first SOA Symposium (hopefully there'll be a blog entry or two from me on that subject later). But I wanted to say that I really enjoyed the latest article from Jim, Savas and Ian. It's been a few years since Jim and Savas worked for me and blogs/articles are often the only regular way we keep in touch. Nice work guys!

Friday, September 26, 2008

Farewell John Warne

I just heard that a friend of mine, John Warne, died this morning. The immediate cause was pneumonia, after he had been diagnosed with liver cancer five weeks ago.

I first met John Warne back in 1987 when I was doing my PhD and he was working at ANSA on behalf of Nortel. He was a great guy, always very passionate about his work but also very down to earth. Over the years our paths crossed time and time again, and no matter how long it had been between meetings it was always as if it had only been a few days. He was a great guy to listen to, with some wonderful stories about things he'd seen and done over the years. I remember him telling the story about how he'd been sent by Nortel to work on the Hong Kong Race Course and that they (the Race Course) had so much money they would fly him first class, best hotels etc. He was a good mentor and always willing to spend time to understand problems or share his knowledge openly. In short, John was one of those rare people in our industry: a really nice and intelligent person, someone you looked forward to seeing.

It's a very sad day. I'm not sure what else I can say. Farewell John, it was an honour and a pleasure to know you.

Saturday, August 30, 2008

A gift interlude

For Christmas 2007 my wife got me an Aqua Sphereing experience. The local centre is also only 5 miles from our house. Given that she was also the one who got me into SCUBA diving, I figured this would be fun. Unfortunately the weather in the UK this summer hasn't been that good, but eventually I managed to find the time and the weather held. Although the whole thing lasted only a minute or so, I had a fantastic time and now know what it feels like to be inside a giant washing machine. My friend Duane did something similar at JavaOne earlier this year, but I don't think there was water involved. If you get the chance, I'd definitely recommend the experience!

For our anniversary this year, my wife got me an EcoSphere (hmmm, there's definitely a spherical theme going on here!). It turned up the other day and it's fantastic. I love this sort of thing: mini-worlds, whether they're real or virtual, have always fascinated me. Maybe it's a god-complex or something, but I can spend hours just watching these worlds evolve.

Friday, August 15, 2008

Erlang and Mac OS 10

Just back from holiday and still playing catch-up, but this caught my eye. I've been using Erlang on my Mac for a while but this would have saved me some time back when I started exploring the language (which I like more and more each day).

Wednesday, June 25, 2008

WS-CDL literature surveys

I've been on the program committees for too many conferences and workshops to keep count. You almost always see the expected bell-curve of paper submissions (5% are really bad, 90% are ok, and 5% are extremely good). But irrespective of the quality and content of a paper, the one thing that always annoys me is bad citations and references. I think this goes back to when we were doing the initial work around Arjuna and looking at how to leverage object-oriented techniques for fault tolerance. These days it'd be very passé, but back then it was the start of OO and the work we did was cutting edge. We weren't the only ones doing work in that area: there was Camelot/Avalon (later went on to become Encina from Transarc) and Argus (ISIS wasn't really about exploiting OO approaches). Whenever we ran into papers by those teams it was very rare to see them reference us, whereas the inverse was almost never the case (timing permitting). Frustrating to say the least. So I always try to ensure that accepted papers have appropriate references. It benefits the author's work as well as the reader (there's nothing more infuriating than reading a paper, asking yourself "OK, but how does this compare with XYZ?" and then finding that the authors don't mention it.)

What has this to do with WS-CDL? Well I'm on another PC (I won't mention which, though I have posted the CFP on the blog already) and recently received a swathe of papers on orchestration and choreography. All of them mentioned BPMN. Several of them mentioned BPEL. A couple of them mentioned Pi-calculus. None of them mentioned WS-CDL (or its predecessors)! Of those that mentioned Pi-calculus, they all duplicated the effort that has gone into CDL. Plus every single paper mentioned the importance of being "standards compliant". It's not as if it's hard to find a mention of CDL via Google (try googling for 'choreography web services'): in the "good ol' days" we used to wonder if the lack of references to Arjuna was down to the complexity of tracking journal and conference papers - this was before the WWW and in the relative infancy of the internet (oh cr*p, doesn't that make me sound old?!)

Why is it that these papers didn't even mention CDL once? I think it's a combination of factors including: poor research on the part of the authors, excellent publicity on the part of major vendors that CDL is a dead standard, and confusion around the relationship of CDL to BPEL. This does a disservice to the people who have worked on CDL over the years: it's an excellent body of work and brings value to the choreography arena both from a static (development) perspective as well as a dynamic (runtime) one. Of course it's in the vendors' best interests to ignore WS-CDL when they've adopted WS-BPEL heavily, but these things are complementary (in fact CDL complements any orchestration implementation, so don't get hung up over the WS part of the name.) But for a researcher, it's not acceptable.

So if you're doing research into choreography (and/or orchestration) and are thinking of writing a paper, you need to look at WS-CDL. Even if it's only to compare and contrast with what you're going to do, it's important. (A cogent argument against it in favour of your own work is fine, as long as the literature survey is complete.) Otherwise you could be disappointed when your paper is rejected.

Sunday, June 15, 2008

What's the point?

I've had the pleasure of working with some very smart people over the years in the area of fault tolerant distributed systems. As a result I've performed research and development in a number of different techniques, including replication (for high availability) and transactions (for consistency). In all that time I've been conscious of the fact that a lot of time and effort has been spent proving that whatever was done worked in the case of failures (whatever the specific definition may be for the particular environment): after all, that's the point of the whole exercise. Yes I know that failures don't happen that often (try selling a transaction manager to people who haven't used one for years and explaining why they really really need to buy one!) But they do happen and that's why fault tolerance techniques (and testing they work in the presence of failures) are so important.

Now why do I bother mentioning this? Well it's come to my attention over the past few years that some purveyors of fault tolerance solutions either don't bother to test the "edge cases" (which are not really edge cases, but the reason for their product's existence) or don't care (and hence don't publicize) that their solutions won't work in the case of some (all?) possible failure modes. I'm not going to name-and-shame them (primarily because I haven't been able to confirm those reports myself), but if you are a user of something that purports to offer high availability or data consistency in the presence of failures, you really need to check what that vendor means and how they go about confirming that their product works as they say it should.
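To make the point concrete, here's a deliberately toy example of exercising a failure path: a two-copy store with a crash injected mid-update, and a check that the recovery path actually restores consistency. None of this reflects any particular vendor's product; the point is simply that the failure path gets tested at all.

```python
# Toy fault-injection test: two copies that must always agree, a simple
# write-ahead intent record, and a simulated crash between the writes.

class Crash(Exception):
    """Stands in for a process/machine failure at the worst moment."""

class TwoSlotStore:
    def __init__(self):
        self.a = self.b = 0
        self.intent = None

    def update(self, value, crash_between=False):
        self.intent = value          # log the intent before touching data
        self.a = value
        if crash_between:
            raise Crash()            # injected failure: b never written
        self.b = value
        self.intent = None           # update complete, clear the log

    def recover(self):
        if self.intent is not None:  # finish any interrupted update
            self.a = self.b = self.intent
            self.intent = None

# Exercise the edge case, not just the happy path:
store = TwoSlotStore()
try:
    store.update(42, crash_between=True)
except Crash:
    pass
store.recover()
print(store.a == store.b == 42)  # True: recovery restored consistency
```

Without the intent record, the same injected crash would leave `a` and `b` disagreeing forever, which is exactly the kind of untested behaviour the post is complaining about.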

Monday, June 09, 2008

SOA 2.0 all over again?

Over two years ago I got frustrated at the announcement of SOA 2.0. Many others were likewise confused and irritated at an attempt to create another hype curve. I'm not going to attribute cause and effect because maybe it would have happened anyway, but SOA 2.0 pretty much bit the dust subsequently. Well while writing up this article for InfoQ I had a serious case of deja vu.

WTF is WOA? Where did it spring from and more precisely where has it been hiding for the past ten years? At its best it seems to be the same as ROA, i.e., a concrete implementation of REST targeting the Web. (I'm not so keen on ROA either: I prefer REST/HTTP.) At its worst it's an excuse to start generating more terms ('Web-oriented SOA'?! You've got to be joking!) One of the original articles on Web Oriented Architecture (aka WOA) was posted by Dion on April 1st, so maybe that was a Freudian Slip on his part?

But if WOA is not the same as REST (or REST as applied to the Web) then it really needs a different acronym: REST is fundamentally the architecture on which the Web is based and everyone understands that now. Alright you do need to clarify how the concepts are implemented (HTTP, JMS etc.), but that's easy to do without coining a whole new terminology. Independently of Roy's thesis, the W3C has done a good job of defining the Web Architecture. Plus, people have been building RESTful good citizen applications for quite a while. Did they need to coin a new term to make it clear what was happening under the hood? I think not. But then again, maybe the intent is to try to outwit us with an attempt at the Chewbacca Defense? I think it's more a case of The Emperor's New Clothes syndrome and we should just say that WOA is naked and move on!

Look guys, we have REST as a well defined and accepted term. Why do we need yet another acronym to mean the same thing? The answer is that we don't, so let's stop polluting the atmosphere with meaningless or duplicate terms and get on with helping end users and developers figure out the best way in which to deliver business functions and data! I can say 'REST as applied to the Web' in less time than it takes to explain WOA and I can guarantee you that more people will understand what I mean with the first description than the second.

Sunday, June 01, 2008

What's it mean to be an architect?

We live in a 19th century station master's house and it's time to either move or add another module (aka extension). We're looking at the latter for a number of reasons. Going this route means we need to have someone come in and draw up plans: the architect. Most people probably think of what an architect does based on the dictionary definition: "One who designs and supervises the construction of buildings or other large structures." But if you think about it more, this person is more than a designer or supervisor: they have to understand a lot about materials and how they fit together in order to assess structural integrity, stress points, resilience to adverse conditions and, of course, the final all-important look. So although they may not get their hands dirty with bricks and mortar (some do), they have to know about the construction as well as the people who will eventually do that work. That doesn't mean they are necessarily as skilled as some others in their team (e.g., I doubt the typical architect would have the skills of a carpenter), but they would need to understand wood, bricks, stone etc.

This requirement for an architect to know how things come together and, if necessary, be able to do some of that development should be common to uses of the term within other sectors and not just building. Whether it be biotechnology, cars or ships, the architect had better be skilled enough to understand how their plans will be affected by the implementation. How they do that may depend upon the industry and the individual (e.g., some building architects have been carpenters in the past).

The software architect needs to be the same. If you've got 'architect' in your title then you need to either be (or have been) a coder. (I think 'be' is better). Whether that means you've coded entire systems, or had to bring together existing modules as well, doesn't really matter as long as you have the understanding of what's possible, practical, reliable and maintainable as a developer. Writing a few XML configuration scripts doesn't cut it in my opinion, because that does not bring the necessary appreciation for architecture. For instance, how can you understand what it means for a failure to happen in a distributed system if you've never had to implement one (or part of one)? So far I've been in the fortunate position to have only met software architects who fit my definition (I suppose a chicken-and-egg situation could be argued).

However, I know others who have met (suffered) exceptions to this rule. In some companies it appears to be endemic that architects are designers or team managers, with little or no coding experience behind them. Maybe they're just lucky and the development teams pick up the slack, ensuring success of the projects. Or maybe the blame for failed projects just manages to go elsewhere. But I know when we decide to go with an extension on our house I'll be looking for an architect who knows more than how to draw pretty pictures or make sure the work is done on time: I don't want the whole thing collapsing on us months or years later!

Saturday, May 31, 2008


ESWSA: Workshop on empirical studies of Web service architectures
(the REST-SOAP debate in numbers)
in conjunction with OOPSLA 2008

The recent rapid growth in size and capability of distributed computing
systems has heralded new types of software architectures, among them the
messaging paradigm championed by Web Services and the distributed hypermedia
model upheld by the Web. Currently these two competing styles, known as
SOAP and RESTful, are used, but little is known about the real-world
engineering characteristics of each style, though each has an active camp of
campaigners. The known comparisons focus on sometimes abstract architectural
principles, and there is little empirical information in the public domain
from specific system implementation experience.

Only one piece of empirical data regarding this debate is available to date.
It comes from Amazon. Jeff Barr, quoted by Tim O'Reilly, noted that 85% of
Web services requests at Amazon are HTTP-based, or RESTful. That was in April

The ongoing conflict between the two groups is often called the "REST-SOAP
debate." Yet actual debates, organized for example during conferences, have
not been conclusive, because they typically fail to convince the proponents of
the competing style. Rather than arguing over abstract concepts, this workshop
will address the merits of each style based on empirical experience of how
systems work in practice.

The workshop will present empirical work on RESTful and SOAP-based Web
Services. We are seeking papers that present empirical engineering evidence
regarding specific aspects of both kinds of services. This evidence will be
the starting point of the discussion during the workshop that aims to:
* Identify what is known empirically about building RESTful and SOAP services;
* Discuss the empirical results to see how widely they apply;
* Confirm or refute abstract claims with empirical evidence; and
* Identify questions for further study.

Workshop submissions should focus on one of the following types of empirical studies.

Firstly, we are soliciting empirical studies or comparisons of SOAP and RESTful
Web services in the context of:
* Publicly accessible services
* Cross-Organization Integration (B2B), or inter-enterprise services
* Enterprise Application Integration (EAI), or intra-enterprise services
* Non-functional requirements of services (e.g. security, reliability, crash recovery)

Second, studies of the REST architectural style, e.g.
* How closely does the Web follow the principles of REST?
* How many Web services claiming to be RESTful follow the principles of REST?

Good sources of arguments regarding the REST-SOAP debate are
* RESTwiki,
* Paul Prescod's paper, "Roots of the REST-SOAP debate," XML 2002.
* "RESTful Web services" book by Leonard Richardson and Sam Ruby
* Web services-related tracks at QCon conferences


We are seeking short papers (up to 6 pages, 9pt font, in ACM format). A
submission must pose an empirical question related to Web services, present
some data that addresses the question and interpret the results. Submissions
will be judged based on soundness of methods, quality of analysis, as well as
relevance of the empirical results to the REST-SOAP debate. They will be
reviewed by Program Committee members, who are industry and academic experts
in the area of Web services.

Submissions will be accepted through the EasyChair submission system available

Authors of accepted papers will be notified by September 2nd (in time to take
advantage of OOPSLA's early registration discount). Authors will have an
opportunity to update their submissions with the reviewers' feedback until
September 20th, 2008. The reviewed submissions will be featured in OOPSLA
Companion 2008 and in the ACM's Digital Library. Note that at least one author
of the submitted paper must be present at the workshop to present it.


* ESWSA paper submission deadline: August 3, 2008
* Notification of acceptance/rejection: September 2, 2008
* OOPSLA's early registration deadline: September 11, 2008
* ESWSA Workshop: Oct 19th or 20th, 2008


The workshop will run over an entire day's session at OOPSLA 2008.
Morning session: paper presentations (20 mins per paper + 10 mins for questions)
Afternoon session: more presentations plus a panel discussion of invited experts


Munawar Hafiz, University of Illinois
Paul Adamczyk, University of Illinois
Jim Webber, ThoughtWorks


Mark Baker, Coactus Consulting
Raj Balasubramanian, IBM
Chris Ferris, IBM
Ralph E Johnson, University of Illinois
Mark Little, Red Hat
Steve Loughran, HP
Mark Nottingham, Yahoo
Savas Parastatidis, Microsoft
Ian Robinson, ThoughtWorks
Halvard Skogsrud, ThoughtWorks
Stefan Tilkov, innoQ
Paul Watson, Newcastle University
Sanjiva Weerawarana, WSO2


Jim Webber, ThoughtWorks
Sanjiva Weerawarana, WSO2
Kyle Brown, IBM
Brian Foote, Industrial Logic
Paul Adamczyk, University of Illinois

For more information about the workshop please visit
or contact the organizers.

Wednesday, May 14, 2008

JavaOne bug

I'll post about JavaOne later, but if you attended you really should check out Duane's blog concerning health issues around the conference. Luckily I wasn't affected, but I know others were!

Tuesday, May 13, 2008

DOA 2008

OTM 2008 Federated Conferences - Call For Papers
Monterrey (Mexico), November 9 - 14, 2008


"OnTheMove (OTM) to Meaningful Internet Systems and Ubiquitous Computing"
co-locates five successful related and complementary conferences:
- International Symposium on Distributed Objects and Applications (DOA'08)
- International Conference on Ontologies, Databases and Applications of
Semantics (ODBASE'08)
- International Conference on Cooperative Information Systems (CoopIS'08)
- International Symposium on Grid computing, high-performAnce and Distributed
Applications (GADA'08)
- International Symposium on Information Security (IS'08)

Each conference covers multiple research vectors, viz. theory (e.g. underlying
formalisms), conceptual (e.g. technical designs and conceptual solutions) and
applications (e.g. case studies and industrial best practices). All five
conferences share the scientific study of the distributed, conceptual and
ubiquitous aspects of modern computing systems, and share the resulting
application-pull created by the WWW.



- Abstract submission: June 8, 2008
- Paper submission: June 15, 2008
- Acceptance notification: August 10, 2008
- Camera ready: August 25, 2008
- Registration: August 25, 2008
- OTM Conferences: November 9 - 14, 2008


CoopIS PC Co-Chairs
* Johann Eder, University of Klagenfurt, Austria
* Masaru Kitsuregawa, University of Tokyo, Japan
* Ling Liu, Georgia Institute of Technology, USA

DOA PC Co-Chairs
* Mark Little, Red Hat, UK
* Alberto Montresor, University of Trento, Italy
* Greg Pavlik, Oracle, USA

ODBASE PC Co-Chairs
* Malu Castellanos, HP, USA
* Fausto Giunchiglia, University of Trento, Italy
* Feng Ling, Tsinghua University, China

GADA PC Co-Chairs
* Dennis Gannon, Indiana University, USA
* Pilar Herrero, Universidad Politécnica de Madrid, Spain
* Daniel S. Katz, Louisiana State University, USA
* María S. Pérez, Universidad Politécnica de Madrid, Spain

IS PC Co-Chairs
* Jong Hyuk Park, Kyungnam University, Korea
* Bart Preneel, Katholieke Universiteit Leuven, Belgium
* Ravi Sandhu, University of Texas, USA
* André Zúquete, University of Aveiro, Portugal

WS-FM 2008


5th International Workshop on Web Services and Formal Methods
September 4-5, 2008, Milan, Italy

Co-located with the 6th International Conference on
Business Process Management (BPM'08)

Important Dates

* Abstract submission deadline: May 19, 2008
* Paper submission deadline: May 26, 2008
* Author notification: June 23, 2008
* Camera-ready pre-proceedings: July 21, 2008
* Workshop dates September 4-5, 2008

Scope of the Workshop

Web Service (WS) technology provides standard mechanisms and protocols
for describing, locating and invoking services available all over the
web. Existing infrastructures already enable providers to describe
services in terms of their interface, access policy and behavior, and
to combine simpler services into more structured and complex
ones. However, research is still needed to move WS technology
from skilled handcrafting to well-engineered practice, supporting
the management of interactions with stateful and long-running services,
large farms of services, quality of service delivery, inter alia.

Formal methods can play a fundamental role in the shaping of such
innovations. For instance, they can help us define
unambiguous semantics for the languages and protocols that underpin
existing WS infrastructures, and provide a basis for
checking the conformance and compliance of bundled services. They can
also empower dynamic discovery
and binding with compatibility checks against behavioural properties
and quality of service requirements. Formal analysis of security
properties and performance is also essential in application areas such as
e-commerce. These are just a few prominent aspects;
the scope for using formal methods in the area of Web Services is
much wider, and the challenges raised by this new area can
offer opportunities for extending the state of the art in formal techniques.

The aim of the workshop series is to bring together researchers
working on Web Services and Formal Methods in order to catalyze
fruitful collaboration. The scope of the workshop is not purely
limited to technological aspects. In fact, the WS-FM series has a strong
tradition of attracting submissions on formal approaches to
enterprise systems modeling in general, and business process modeling
in particular. Potentially, this could have a significant impact on
the on-going standardization efforts for Web Service technology.

List of Topics

This edition of the workshop will have a special focus on the
integration of different ways for conceiving Web Services, like
orchestration vs choreography, Petri nets and workflow models vs
process calculi ones, client-server interaction vs multiparty
conversation, secure but static service binding vs open dynamic
binding, etc.

Other topics of interest include, but are not limited to:

* Formal approaches to service-oriented analysis and design
* Formal approaches to enterprise modeling and business process modeling
* WS coordination and transactions frameworks
* Formal comparison of different models proposed for WS protocols and
* Formal comparison of different approaches to WS choreography and orchestration
* Types and logics for WS
* Goal-driven and semantics-based discovery and composition of WS
* Model-driven development, testing, and analysis of WS
* Security, performance and quality of services
* Semi-structured data management and XML technology
* WS ontologies and semantic description
* Innovative application scenarios for WS

We also encourage the submission of tool papers, describing tools based on
formal methods, to be exploited in the context of Web Services applications.


Submissions must be original and should neither be already published
somewhere else nor be under consideration for publication while being
evaluated for this workshop.

We are negotiating with Springer to publish all accepted papers in the
workshop post-proceedings as a volume of Lecture Notes in Computer Science
(LNCS), to appear a few months after the workshop.

Papers are to be prepared in LNCS format and must not exceed 15 pages.

All papers must be submitted following the instructions at the
WS-FM'08 submission site, handled by EasyChair:


Information about previous editions of the workshop can be found at


Starting from 2007, the workshop has taken over the activities of the
online community formerly known as the "Petri and Pi" Group, which
brought the community of workflow-oriented researchers closer to that of
process-calculi-oriented researchers. People interested
in the subject can still join the active mailing list on "Formal
Methods for Service Oriented Computing and Business Process
Management" (FMxSOCandBPM) available at

Steering Committee

W. van der Aalst (Eindhoven University of Technology, The Netherlands)
M. Bravetti (University of Bologna, Italy)
M. Dumas (University of Tartu, Estonia)
J.L. Fiadeiro (University of Leicester, UK)
G. Zavattaro (University of Bologna, Italy)

Program Committee


R. Bruni (University of Pisa, Italy)
K. Wolf (University of Rostock, Germany)

Other PC members:

F. Arbab (CWI, The Netherlands)
M. Baldoni (University of Torino, Italy)
A. Barros (SAP Research Brisbane, Australia)
B. Benatallah (University of New South Wales, Australia)
K. Bhargavan (Microsoft Research Cambridge, UK)
E. Bonelli (Universidad Nacional de Quilmes, Argentina)
* M. Butler (University of Southampton, UK)
P. Ciancarini (University of Bologna, Italy)
F. Curbera (IBM Hawthorne Heights, U.S.)
G. Decker (HPI Potsdam, Germany)
F. Duran (University of Malaga, Spain)
S. Dustdar (University of Vienna, Austria)
A. Friesen (SAP Research Karlsruhe, Germany)
S. Gilmore (University of Edinburgh, Scotland)
R. Heckel (University of Leicester, UK)
D. Hirsch (Intel Argentina, Argentina)
F. Leymann (University of Stuttgart, Germany)
* M. Little (Red Hat, UK)
N. Kavantzas (Oracle Inc., U.S.)
A. Knapp (LMU Munich, Germany)
F. Martinelli (CNR Pisa, Italy)
H. Melgratti (University of Buenos Aires, Argentina)
S. Nakajima (National Institute of Informatics, Japan)
M. Nunez (Complutense University of Madrid, Spain)
J. Padget (University of Bath, UK)
G. Pozzi (Politecnico Milano, Italy)
R. Pugliese (University of Florence, Italy)
A. Ravara (Technical University of Lisbon, Portugal)
S. Ross-Talbot (pi4tech)
N. Sidorova (Eindhoven University of Technology, The Netherlands)
C. Stahl (Humboldt-University Berlin, Germany)
E. Tuosto (University of Leicester, UK)
H. Voelzer (IBM Zurich, Switzerland)
D. Yankelevich (Pragma Consultores, Argentina)
P. Yendluri (Software AG, U.S.)

Saturday, April 26, 2008


Have been doing some thinking and planning around what Extreme Transaction Processing should be. Now all I need to do is find time to do something about it!

A tribute to Jim

Not a lot more I can say. He is missed by all those who knew him.

Saturday, April 19, 2008

OPENflow deserves another chance

While we were dutifully working on Arjuna, Arjuna2, JavaArjuna, OTSArjuna and other things, Stuart and Santosh (amongst others) were also hard at work on OPENflow. As with many things we did back then, it began life as an attempt to improve standards, this time workflow, in collaboration with Nortel (and it was better received by users than the competitor specification from the WfMC). Over the next few years it went much further than the original submission and became one of our product offerings. Although the research and development took a bit of a back seat to the JTS development when we were acquired by Bluestone, it still managed to be cutting edge.

Unfortunately, when we were acquired by HP they already had a workflow product (Process Manager Interactive). The decision was taken to stop OPENflow and that was essentially that. (It was also the point at which HP Middleware, a predominantly Java-based division, decided to mothball our C++ transaction service.) But even today OPENflow offers capabilities that modern equivalents could benefit from. Quite a few capabilities, to be perfectly honest. I think it's time to revisit past decisions and re-learn forgotten techniques!

I haven't even begun to blog about B2B Objects yet, either.

Degrees of coupling

Over the past few years we've seen the distributed system industry moving to embrace loose coupling as though it's a global panacea to all of the woes of the previous decades. I've said on many occasions that coupling (loose or close) is something that cannot be taken in isolation: as with most things there's a trade-off to be made and there are degrees of coupling (no innuendos intended). I made that point again with my first presentation of 2008 and as recently as QCon, also taking that opportunity to point out again that loose coupling isn't something discovered or invented by the distributed systems community. It's a general software engineering pattern that has been used since Noah used his ZX76BC.

Now what makes me write about this again, when it's old hat? Well, Jim's written a nice piece on coupling and cohesion. It's worth a read, but what prompted me to add this entry wasn't the subject itself but the fact that it references Pete Lee. As with Jim, Pete was one of my undergraduate teachers and took me through two years of software engineering. And it was this course that first brought loose coupling and cohesion (and many other things) to my attention. When I was preparing my presentation for the winter school I wanted to pull some specific references from the software engineering book we used during that undergraduate course, but it's stuck up in the loft and I was too lazy to go and find it. It's been over 20 years since I last saw it, but I'm pretty sure it was the first edition of Ian Sommerville's excellent book. Maybe time to buy the latest edition!

Winter School presentation

Back in January I gave a two-day presentation on the evolution of distributed systems at the CUSO Winter School. The audience was a mixture of students and professors with a variety of backgrounds (most not in distributed systems research). So I had a great opportunity to start with a blank slate and try to give an historical background as well as comparing and contrasting different approaches. The feedback during the event was great, and the act of just sitting down and having to create the presentation helped coalesce a lot of things that I've worked through over the past 20 years.

QCon London 2008

It's a bit late, but here's a quick summary of QCon London 2008. Although I've been an InfoQ editor for a couple of years, this was my first time at a QCon and I was impressed. The presentations I saw were all packed and technically very good: there was none of the "product placement" that you tend to see more and more at conferences. Even the vendor pavilion was less of a car salesroom and more another opportunity to share technical discussions.

I jumped across the tracks during the days I wasn't presenting. However, on the Thursday I stayed with my track. Given what I'd heard about the same track at the last QCon, I was expecting a lot of controversy. However, this time I think the community has moved on and accepted that one size doesn't fit all, which coincidentally was the subject of my presentation. The entire day of the track went well and I thought all of the presentations came together very cohesively.

Diving at long last

It's been really difficult to find the time and the weather to go diving. With the 6 month clock ticking, we finally bit the bullet and got wet. The weather wasn't great, so we opted for Ellerton again. Although my dive computer said it was 9C, certain parts of my body registered much lower! Whereas others were diving in dry suits, we were roughing it in our 5mm wet suits. We managed to get nearly an hour of dive time despite the temperature. And I'm sure my hands were blue when I started!

I love diving for a number of reasons. Not least of which is the silence and ability to think about things while I'm down there. I managed to resolve a few issues I haven't been able to get to during my normal working day and a couple of new blog entries will be coming as a direct result. Now it had better not be another 6 months before my next dive!

Friday, April 11, 2008

Another blast from the past

I was in Neuchatel this week for some meetings and one of our conversations moved on to failure detection/failure suspecting: the fact that you cannot reliably detect failures until (and unless) those failures are eventually recovered from. Typical "detection" uses timeouts, and if you use the wrong value you can end up in a world of pain. That's where failure suspectors come in: the idea is that if you think something has failed then you make sure everyone else agrees with you, so even if you are wrong you don't end up with split-brain syndrome. This reminded me of some work I did back in the 90s around quantum mechanics and failure detectors.
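
The "suspect, then agree" idea can be made concrete with a toy sketch. The Python below is purely illustrative (the names and the agreement scheme are mine, not the actual algorithms from that work): each member suspects a node once its heartbeats go silent beyond a timeout, but the group only treats the node as failed when every member concurs, which is what protects you from a single misfiring timeout causing split-brain.

```python
import time

class FailureSuspector:
    """Toy timeout-based failure suspector (illustrative only)."""

    def __init__(self, timeout, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.last_heartbeat = {}  # node -> time of last message seen
        self.suspected = set()

    def heartbeat(self, node):
        # Any message from the node refutes an existing suspicion.
        self.last_heartbeat[node] = self.clock()
        self.suspected.discard(node)

    def check(self, node):
        # Silence beyond the timeout raises a suspicion. It does NOT
        # prove failure: the timeout value may simply be wrong.
        if self.clock() - self.last_heartbeat[node] > self.timeout:
            self.suspected.add(node)
        return node in self.suspected

def group_agrees(suspectors, node):
    # Only act on a suspicion (e.g. exclude the node) once every
    # member concurs; a lone mistaken member cannot split the group.
    return all(s.check(node) for s in suspectors)
```

Note that `check` assumes at least one heartbeat has been seen; a real implementation also has to bootstrap membership, exchange suspicions over the network and cope with the suspectors themselves failing.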

Sunday, March 23, 2008

A couple of not so obvious facts around REST/HTTP

While composing an entry on QCon I came across a couple of factoids around REST/HTTP that I had thought obvious but when I mentioned them at the event a few people found them surprising. So rather than bury them in that post (when it eventually appears), I thought I'd bring them up here:

  • I've been developing applications on the Web since it was first released: being at University at the time, I had a lot of freedom to play. I even wrote a browser in InterViews! (Anyone else remember gopher?) Anyway, I remember being glad when the first URN proposal came out because it looked to address some of the issues we mentioned at the time, through the definition of a specific set of name servers: no longer would you have to use URLs directly, but you'd use URNs and the infrastructure would look them up via the name server(s) for you. Sound familiar? Well fast forward 10 years and that never happened. Or did it? Well if you consider what a naming service (or trading service) does for you, WTF is Google or Yahoo?

  • My friend and co-InfoQ colleague/editor Stefan has another nice article on REST. In it he addresses some of the common misconceptions around REST, and specifically the perceived lack of pub/sub. You what? As he and I have mentioned separately, it seems pretty obvious that RSS and Atom are the right approach in RESTland. The feedback I got at QCon the other week put this approach high on my pet projects list for this vacation, so I've been working on that for our ESB as well as some other stealth projects of my own.
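
The machinery needed for Atom-as-pub/sub is surprisingly small. Here's an illustrative Python sketch (the feed content and function names are made up): the client polls an Atom feed, a plain HTTP resource, and remembers which entry ids it has already processed, which is all "subscription" amounts to in RESTland.

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def new_entries(feed_xml, seen_ids):
    """Return titles of entries in an Atom feed not seen before.

    In practice feed_xml would come from an HTTP GET on the feed URI,
    ideally a conditional GET (ETag/If-None-Match) so polling an
    unchanged feed costs almost nothing.
    """
    root = ET.fromstring(feed_xml)
    fresh = []
    for entry in root.findall(ATOM + "entry"):
        eid = entry.findtext(ATOM + "id")
        if eid not in seen_ids:
            seen_ids.add(eid)  # client-side state: ids already delivered
            fresh.append(entry.findtext(ATOM + "title"))
    return fresh
```

Poll the feed on whatever interval suits the application: no broker, no special protocol, just resources and GET.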

Now the folks I met at QCon were all very bright. So their surprise at these "revelations" came as a bit of a surprise to me. But hey, maybe it wasn't a good statistical sample.

Monday, March 17, 2008

Beautiful Code

Just back from QCon London and taking the day off (another one of those "use 'em or lose 'em" days). I'll say more about QCon in a separate entry, but I wanted to mention something that came up there but which has been playing on my mind for a while anyway: the art of beautiful code and refactoring. I heard a number of people saying that you shouldn't touch a programming language if you can't (easily) refactor applications written using it. I've heard similar arguments before, which comes back to the IDEs available. I'd always taken this as more of a personal preference than any kind of Fundamental Law, and maybe that (personal preference) is how many people mean it. However, listening to some at QCon it's starting to border on the latter, which really started me thinking.

Maybe it's just me, but I've never consciously factored in the question "Can I refactor my code?" when choosing a language for a particular problem. I think that's because when I started using computers you only had batch processing (OK, when I really started we were using punch card and paper-tape, but let's gloss over that one). Time between submitting and compiling was typically half an hour, not including the 6 floors you had to descend (and subsequently ascend). So you tried to get your programs correct as quickly as possible, or developed very good calf muscles! Refactoring wasn't possible back then, but even if it was I don't think most of us would have bothered because of the batch system implications.

I try (and fail sometimes) to get the structure of my programs right at the start, so even today I typically don't make use of refactoring in my IDE. (Hey, it's only recently that I stopped using emacs as my de-facto editor, just to shut up others!) But this is where I came in: it's a personal thing. Your mileage may vary and whatever you need to do to help you get by is fine, surely? Why should it be the subject of yet another fierce industry battle? Are we really so short of things to do that we have to create these sorts of opportunities?

Oh well, time to take the day off.

Saturday, March 08, 2008

Distributed Java Project

While doing the project migration for C++SIM/JavaSim, I came across another old project of mine: a distributed Java framework. Back when Java was still Oak, there was no such thing as Java RMI. The kind of research we did in the Arjuna Project was all distributed in nature and we already had a C++ stub generator and Rajdoot RPC mechanism. So as the W3Objects work expanded (more on that in another entry), I took to implementing distributed Java. The system was interoperable with our C++ equivalent and generated client and server stubs based on C++ header files or Java interfaces. It was used in some of our research for a few years, but fell away as Java moved on and it became more of a chore to update. Ah ... those were the days.


Back in 1990 my friend Dan McCue and I were doing work on replica management and a way to compute the optimum number and location of replicas to achieve a desired level of availability. (Yes, availability is not necessarily proportional to the number of replicas.) We needed to do some simulation work and started out with Simula, which is a nice language but one that neither of us had much experience with at the time. Both of us were (are?) C++ die-hards, so we decided that the best way would be to build our own simulation toolkit in C++, and C++SIM was born.

C++SIM was very successful for us (thanks to Isi for helping with some of the statistical functions). It has been used in a number of academic and industrial settings. It was probably one of the early open source offerings too, since it was made freely available by the University. I learnt a lot from developing it, not least of which was multi-threaded programming: this was the age before the general availability of thread-aware languages and operating systems. Sun's Lightweight Process package in SunOS had been around for a few years and Posix Threads was still in its infancy. But when you wanted to run simulations on different operating systems, it was impossible to target the same thread package everywhere. So I wrote a thread abstraction layer for C++SIM, as well as a couple of threading packages (ah, setjmp/longjmp were my best friends back then).
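
For anyone who hasn't built one, the core of such a toolkit is a tiny discrete-event scheduler. This Python sketch is purely illustrative (it bears no resemblance to the real C++SIM/JavaSim APIs): events sit in a priority queue ordered by simulated time, and the clock jumps from one event to the next rather than ticking.

```python
import heapq

class Scheduler:
    """Toy discrete-event scheduler: the heart of a simulation kernel."""

    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker keeps equal-time events in FIFO order

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self._queue:
            # Advance the clock straight to the next event's time.
            self.now, _, action = heapq.heappop(self._queue)
            action()

# Tiny example: three arrivals, two simulated time units apart.
log = []
sched = Scheduler()

def arrival(i):
    log.append((sched.now, "job%d arrives" % i))
    if i < 3:
        sched.schedule(2.0, lambda: arrival(i + 1))

sched.schedule(0.0, lambda: arrival(1))
sched.run()  # log now holds arrivals at t=0.0, 2.0 and 4.0
```

Real toolkits layer process-style coroutines, random-number streams and statistics gathering on top of exactly this loop, which is where the thread abstraction layer came in.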

In 1996 I ported C++SIM to Java, and JavaSim was born. (I've never been that good with sexy names!) Because of the massive adoption of Java, JavaSim saw more uptake than C++SIM. It was also easier to implement and maintain. Again, over the intervening years it's had a lot of use and I'm still getting feedback from people asking for updates or just reporting how they are using it (them).

Now the problem was that their current homes were limiting. The source code repository changed several times and I didn't have direct access to maintain it. The web site was also outside of my control once I left the University. So I finally got agreement from them to move it outside and change the licence to something a bit more modern. I've been working on this shift for about 9 months (though it's really only taken me a couple of weeks to do), but JavaSim/C++SIM now have a new home in Codehaus. The move isn't quite complete (I still need to find the source for the docs), but it's a start.

JBossWorld recap

It's been a couple of weeks since I got back from JBossWorld Orlando. Enough time to blog, but not enough spare time to blog! So while waiting for the family to get ready so we can go to a three year old's birthday party (Hmmm, screaming kids ... fun!) I decided to grab some time and give a recap.

I've been to every JBossWorld bar the first one and I have to say that this one was the best (with the exception of the JBoss party, which was not a JBossWorld party at all - maybe a Red Hat party in disguise?) There were more people at the event and this was obvious in the sessions: every one I went to was packed, some with people sitting on the floors in the aisles! The quality of the sessions was also really good too.

Maybe it has something to do with the fact we missed a JBW last year and people were relieved to see that they are back, or maybe it was the fact we've made a lot of improvements to the technologies and processes over the past year or so. I don't have the answer, but I do know that the whole event was buzzing. When I go to conferences or workshops I usually find time to do some work (e.g., catching up on things I haven't had time to do over the previous weeks or months). Not this time: if I wasn't presenting or listening to presentations, I was talking to users, customers or friends/colleagues.

I think one of the highlights for me was my presentation on JBoss Transactions. I've done presentations on JBossTS for so long (going back decades if you count Arjuna) that I can usually predict the audience: a select number of die-hard transaction users who already "get it" and want to talk shop. Not this time. The room was packed (with people standing and sitting on the floor). Even more so than the presentation on JBossESB! So much so that I had to ask the audience if they were all in the right room! Everyone stayed until the end (always a good sign) and there were lots of good questions and in-depth discussions.

We made a lot of interesting announcements during the event and I got pulled into a few press and analyst meetings. I know that all of the JBoss/Red Hat folks were happy the event took place, but so were the people from outside the company. That definitely is the highlight for me. And of course it was good to see Marc there too. It wouldn't really be a JBossWorld without him.

Wednesday, March 05, 2008

Vista Woes

So far I've managed to avoid having to use Windows Vista, but I've heard the rumours of problems over the past 12 months. Given the hype that has surrounded Vista for the past few years, it's really disappointing to hear. Until now, though, it was all hearsay. Then we bought my son a new laptop and it came with Vista pre-installed. He's been using a 5 year old PIII running XP and now has a Dual Core 2Gig running Vista.

My initial impressions of Vista were that it looked good and felt fresh. But within an hour of using it both he and I were frustrated by the interface (WTF were they thinking of when they developed this?) and the speed: it's really slow! Now I know the machine itself is fast because we're running XP and Linux on the exact same configuration. So this sluggishness is purely down to the OS. After 2 months of trying to put up with it, I have to say that everything bad I've heard about Vista seems to be borne out. I'm probably going to persevere with it for a while longer just in case MSFT get their act together, but I can see us nuking Vista and going back to XP soon if things don't improve.

Thursday, February 21, 2008

Are IONA's days numbered?

I've been involved with IONA in one way or another for the best part of 20 years, so although this announcement is no surprise, it's still a sad day.

Thursday, January 24, 2008

DOA 2008

When I was asked to give a keynote at DOA 2007 I was also asked to be a co-chair on DOA 2008. We've been working to get the PC finalised and I'm happy to say that I persuaded my long time friend Greg to join me as a co-chair. Here's the CFP and I hope to see you in Monterrey!


The 10th International Symposium on

Distributed Objects, Middleware, and Applications (DOA'08)

Monterrey, Mexico, Nov 10 - 12, 2008

Many of the world's most important and critical software systems are based on distributed object and middleware technologies. Middleware is software that resides between the applications and the underlying operating systems on every node of a distributed computing system. It provides the "glue" that connects distributed objects and applications and is at the heart of component-based systems, service-oriented architectures, agent-based systems, or peer-to-peer infrastructures.

Distribution technologies have reached a high level of maturity. Classical distributed object middleware (e.g., CORBA, .NET and Java-based technologies) and message-oriented middleware (e.g., publish/subscribe systems) have been widely successful. We are now witnessing a shift to coarser-grained component-based and service-oriented architectures (e.g., Web services). Middleware for mobile applications and peer-to-peer systems (e.g., JXTA) is also gaining increasing popularity, as it allows bridging users without reliance on centralized resources.

Common to all these approaches are goals such as openness, reliability, scalability, awareness, distribution transparency, security, ease of development, or support for heterogeneity between applications and platforms. Also, of utmost importance today is the ability to integrate distributed services and applications with other technologies such as the Web, multimedia systems, databases, peer-to-peer systems, or Grids. Along with the rapid evolution of these fields, continuous research and development is required in distributed technologies to advance the state of the art and broaden the scope of their applicability.

Two Dimensions: Research & Practice

Research in distributed objects, components, services, and middleware establishes new principles that open the way to solutions that can meet the requirements of tomorrow's applications. Conversely, practical experience in real-world projects drives this same research by exposing new ideas and unveiling new types of problems to be solved. DOA explicitly intends to provide a forum to help trigger and foster this mutual interaction. Submissions are therefore welcomed along both these dimensions: research (fundamentals, concepts, principles, evaluations, patterns, and algorithms) and practice (applications, experience, case studies, and lessons). Contributions attempting to bridge the gap between these two dimensions are particularly encouraged. As we are fully aware of the differences between academic and industrial research and development, submissions will be treated accordingly and judged by a peer review not only for scientific rigor (in the case of "academic research" papers), but also for originality and relevance (in the case of "case study" papers).

About DOA

DOA 2008 is part of a joint event on the theme "meaningful Internet systems and ubiquitous computing". This federated event co-locates five related and complementary conferences in the areas of networked information systems, covering key issues in distributed infrastructures and enabling technologies (DOA), data and Web semantics (ODBASE), cooperative information systems (CoopIS), Grid computing (GADA) and information security (IS). More details about this federated event can be found at .


The topics of this symposium include, but are not limited to:

* Application case studies of distribution technologies
* Aspect-oriented approaches for distributed middleware
* Component-based distributed systems
* Content distribution and multimedia streaming
* Dependency injection
* Development methodologies for distributed applications
* Distributed algorithms and communication protocols
* Distributed business objects and components
* Distributed databases and transactional systems
* Distributed infrastructures for cluster and Grid computing
* Distributed middleware for embedded systems and sensor networks
* Formal methods and tools for designing, verifying, and evaluating distributed middleware
* Interoperability with other technologies
* Microcontainers
* Middleware for mobile and ad-hoc networks
* Migration of legacy applications to distributed architectures
* Novel paradigms to support distribution
* Object-based, component-based, and service-oriented middleware
* Peer-to-peer and decentralized infrastructures
* Performance analysis of distributed computing systems
* Publish/subscribe, event-based, and message-oriented middleware
* Reliability, fault tolerance, quality-of-service, and real time support
* Scalability and adaptivity of distributed architectures
* Self-* properties in distributed middleware
* Service-oriented architectures
* Software engineering for distributed middleware systems
* Testing and validation of distributed infrastructures
* Ubiquitous and pervasive computing
* Web services


Abstract Submission Deadline: June 8, 2008
Paper Submission Deadline: June 15, 2008
Acceptance Notification: August 10, 2008
Camera Ready Due: August 25, 2008
Registration Due: August 25, 2008
OTM Conferences: November 9 - 14, 2008


Papers submitted to DOA'08 must not have been accepted for publication elsewhere or be under review for another workshop or conference.

All submitted papers will be carefully evaluated based on originality, significance, technical soundness, and clarity of expression. All papers will be refereed by at least three members of the program committee, and at least two will be experts from industry in the case of practice reports. All submissions must be in English.

Submissions must not exceed 18 pages in the final camera-ready paper style.

The paper submission site will be announced later.
Failure to comply with the formatting instructions for submitted papers will lead to the outright rejection of the paper without review.

Failure to commit to presentation at the conference automatically excludes a paper from the proceedings.


OTM'08 General Co-Chairs

* Robert Meersman, VU Brussels, Belgium
* Zahir Tari, RMIT University, Australia

DOA'08 Program Committee Co-Chairs

* Mark Little, Red Hat, UK
* Alberto Montresor, University of Trento, Italy
* Greg Pavlik, Oracle, USA

Program Committee Members

* Santosh Shrivastava, University of Newcastle upon Tyne
* Nick Kavantzas, Oracle, USA
* Stuart Wheater, Arjuna Technologies
* Aniruddha S. Gokhale, Vanderbilt University
* Michel Riveill, Université de Nice Sophia Antipolis, France
* Gero Mühl, Berlin University of Technology, Germany
* Fernando Pedone, University of Lugano, Switzerland
* Graham Morgan, Newcastle University, UK
* Barrett Bryant, University of Alabama at Birmingham, USA
* Michael Stal, Siemens, Germany
* Jose Orlando Pereira, University of Minho
* Luis Rodrigues, INESC-ID/IST
* Francois Pacull, Xerox Research Centre Europe
* Aad van Moorsel, University of Newcastle, UK
* Gordon Blair, Lancaster University, UK
* Pascal Felber, Université de Neuchâtel, Switzerland
* Joe Loyall, BBN Technologies, USA
* Mark Baker, Coactus Consulting, Canada
* Rui Oliveira, University of Minho, Portugal
* Harold Carr, Sun, USA
* Fabio Kon, University of São Paulo, Brazil
* Judith Bishop, University of Pretoria, South Africa
* Arno Puder, San Francisco State University, USA
* Shalini Yajnik, Avaya Labs, USA
* Benoit Garbinato, University of Lausanne, Switzerland
* Calton Pu, Georgia Tech, USA
* Geoff Coulson, Lancaster University, UK
* Hong Va Leong, Hong Kong Polytechnic University, Hong Kong
* Nikola Milanovic, Technical University Berlin
* Jean-Bernard Stefani, INRIA, France
* Andrew Watson, OMG, USA
* Gregory Chockler, IBM Haifa Labs, Israel
* Gian Pietro Picco, University of Trento, Italy
* Patrick Eugster, Purdue University, USA
* Eric Jul, University of Copenhagen, Denmark
* Jeff Gray, University of Alabama at Birmingham, USA
* Mehdi Jazayeri, University of Lugano, Switzerland
* Richard Soley, OMG, USA

TIP is deprecated

'Nuff said.

Thursday, January 17, 2008

QCon London

Stefan invited me to present at QCon London this year. I'm looking forward to it, and particularly to catching up with Steve (who owes me a few pints!) and Jim.

Wednesday, January 16, 2008

Extreme Transaction Processing

I've been meaning to write something about Extreme Transaction Processing (XTP) for a long time, ever since I first read the Gartner report a couple of years ago (I think!). I read it again recently just to refresh my memory and to make sure I hadn't missed something. I hadn't. I'm disappointed. This is hardly extreme ("utmost or exceedingly great in degree") if you've been tracking transaction processing for the past decade or so. Maybe moderate ("of medium quantity, extent, or amount") at best.

So far this seems like another example of hype over substance, which is a shame because I believe there is a need for something truly extreme and paradigm shifting. Needless to say we'll have something to say on that subject later and it will certainly cover the scenarios current XTP users seem to want. However, given the background you can definitely expect much more. Maybe we can call it Beyond Extreme Transaction Processing?

Saturday, January 12, 2008

Adobe and SOA

Not many people associate SOA with Adobe, but they should. For a start, my friend Duane was chair of OASIS SOA-RM (still the only standard for defining what SOA is and is not). They also write interesting papers on the subject, such as this one. Well worth a read.