Tuesday, June 22, 2010

MW4SOC CFP

CALL FOR PAPERS
===============

+--------------------------------------------------------+
| 5th Middleware for Service-Oriented Computing (MW4SOC) |
| Workshop at the ACM/IFIP/USENIX Middleware Conference  |
+--------------------------------------------------------+

Nov 29 - Dec 3, 2010
Bangalore, India
http://www.dedisys.org/mw4soc10/

This workshop has its own ISBN and will be included in the ACM digital library.

Important Dates
===============
Paper submission: August 1, 2010
Author notification: September 15, 2010
Camera-ready copies: October 1, 2010
Workshop date: November 30, 2010

Call details
============
The initial visionary promise of Service Oriented Computing (SOC) was a world of cooperating services being loosely coupled to flexibly create dynamic business processes and agile applications that may span organisations and heterogeneous computing platforms but can nevertheless adapt quickly and autonomously to changes of requirements or context. Today, the influence of SOC goes far beyond the initial concepts of the original disciplines that spawned it. Many would argue that areas like business process modelling and management, Web2.0-style applications, data as a service, and even cloud computing emerge mainly due to the shift in paradigm towards SOC. Nevertheless, there is still a strong need to merge technology with an understanding of business processes and organizational structures.

While the immediate need of middleware support for SOC is evident, current approaches and solutions still fall short by primarily providing support for only the intra-enterprise aspect of SOC and do not sufficiently address issues such as service discovery, re-use, re-purpose, composition and aggregation support, service management, monitoring, and deployment and maintenance of large-scale heterogeneous infrastructures and applications. Moreover, quality properties (in particular dependability and security) need to be addressed not only by interfacing and communication standards, but also in terms of actual architectures, mechanisms, protocols, and algorithms. Challenges are the administrative heterogeneity, the loose coupling between coarse-grained operations and long-running interactions, high dynamicity, and the required flexibility during run-time. Recently, massive-scale and mobility were added to the challenges for Middleware for SOC.

These considerations also lead to the question of to what extent service-orientation at the middleware layer itself is beneficial (or not). Recently emerging "Infrastructure as a Service" and "Platform as a Service" offerings, from providers like Amazon, Google, IBM, Microsoft, or from the open source community, support this trend towards cloud computing, which provides corresponding services that can be purchased and consumed over the Internet. However, providing end-to-end properties and addressing cross-cutting concerns like dependability, security, and performance in cross-organizational SOC is a particular challenge, and the limits and benefits thereof have yet to be investigated.

The workshop consequently welcomes contributions on how specifically service oriented middleware can address the above challenges, to what extent it has to be service oriented by itself, and in particular how quality properties are supported.

Topics of interest
==================
* Architectures and platforms for Middleware for SOC.
* Core Middleware support for deployment, composition, and interaction.
* Integration of SLA (service level agreement) and/or technical policy support through middleware.
* Middleware support for service management, maintenance, monitoring, and control.
* Middleware support for integration of business functions and organizational structures into service-oriented systems (SOS).
* Evaluation and experience reports of middleware for SOC and service oriented middleware.

Workshop co-chairs
==================
Karl M. Göschka (chair)
Schahram Dustdar
Frank Leymann
Helen Paik

Organizational chair
====================
Lorenz Froihofer, mw4soc@dedisys.org

Program committee
=================
Paul Brebner, NICTA (Australia)
Gianpaolo Cugola, Politecnico di Milano (Italy)
Walid Gaaloul, Institut Telecom (France)
Harald C. Gall, Universität Zürich (Switzerland)
Nikolaos Georgantas, INRIA (France)
Chirine Ghedira, Univ. of Lyon I (France)
Svein Hallsteinsen, SINTEF (Norway)
Yanbo Han, ICT Chinese Academy of Sciences (China)
Valérie Issarny, INRIA (France)
Mehdi Jazayeri, Università della Svizzera Italiana (Switzerland)
Bernd Krämer, University of Hagen (Germany)
Mark Little, JBoss (USA)
Heiko Ludwig, IBM Research (USA)
Hamid Reza Motahari Nezhad, HP Labs (USA)
Nanjangud C. Narendra, IBM Research (India)
Rui Oliveira, Universidade do Minho (Portugal)
Cesare Pautasso, Università della Svizzera Italiana (Switzerland)
Fernando Pedone, Università della Svizzera Italiana (Switzerland)
Jose Pereira, Universidade do Minho (Portugal)
Florian Rosenberg, Vienna University of Technology (Austria)
Giovanni Russello, Create-Net (Italy)
Regis Saint-Paul, CREATE-NET (Italy)
Dietmar Schreiner, Vienna University of Technology (Austria)
Bruno Schulze, National Lab for Scientific Computing (Brazil)
Francois Taiani, Lancaster University (UK)
Aad van Moorsel, University of Newcastle (UK)
Roman Vitenberg, University of Oslo (Norway)
Michael Zapf, Universität Kassel (Germany)
Liming Zhu, NICTA (Australia)

Saturday, June 19, 2010

JBossWorld and JUDCon

Off to Boston tomorrow for JBossWorld and JUDCon. The former is always a good event, but it's really the latter that I'm looking forward to the most, since it's the first one ever and it's taken us a while to organize. I'm already working on the European version that'll be coming in a few months' time, so I hope to get some constructive feedback from the people attending over the coming few days. And for my long flight to Boston I'm having a complete break and finishing reviewing someone's PhD thesis.

Tuesday, June 15, 2010

Classic paper

I came across this paper in my pile of printed materials from when I was doing my PhD. Great paper at the time and still applicable today. Well worth a read.

One of the best books I've ever read ...

I didn't realise that To Kill A Mockingbird was almost 50 years old! It's definitely one of the best books I've read (I first read it as a teenager at school) and I go back to it every few years. It surprised me how much I loved the book from the start, given that at the time sci-fi and fantasy were the mainstays of my library. But from the age of 13 or 14 to the present day it's definitely something that I recall in vivid detail. And of course Gregory Peck makes a great (if not quite old enough) Atticus Finch. If you haven't read it then you should definitely do so!

Monday, June 14, 2010

When did we decide lock-in was good?

Many years ago the number of standards bodies around could be counted on the fingers of one hand. Back then, vendor lock-in was a reality and most of the time a certainty. But our industry matured and customers realised that being tied to a single vendor wasn't always a good thing. Over the following 20 years or so, standards bodies sprang up to cover almost every aspect of software. It's arguable that we now have too many standards bodies! Plus standards only work when they are based on experience and need: rushing to a standard too early in the cycle can result in something that isn't useful and may inhibit the real standard when it's eventually needed.

But I think, or at least thought, that most people, including developers and end-users, understand why standards are important. The move towards DCE, CORBA and J2EE illustrated this. Yes, sometimes these standards weren't the easiest to use, but good standards should evolve to take this sort of thing into account, e.g., the differences between Java EE 6 and J2EE 1.0 are significant in a number of areas, not least of which is usability.

Furthermore, a standard needs to be backed by multiple independent vendors and ideally come from an independent standards body (OK, Java doesn't fit this last point, but maybe that will change some day). So it annoys me at times when I hear the term "open standard" used to refer to something that may be used by many people but still comes from a single vendor, or perhaps a couple, and certainly doesn't fit any of the generally accepted meanings of the term.

And yes, this rant has been triggered by some of the recent announcements around Cloud. Maybe it's simply due to where we are in the hype curve, because I can't believe that developers and users are going to throw away the maturity of thought and process that has been built up over the past few decades concerning standards (interoperability, portability, etc.), or have the wool pulled over their eyes. Of course we need to experiment and do research, but let's not ignore the fact that there are real standards out there that either are applicable today or could be evolved to applicability by a concerted effort from many collaborating vendors and communities. You don't want to deploy your applications into a Cloud only to find that you can't get them back or can't integrate them with a partner or customer who may have chosen a different Cloud offering. That's not a Cloud, that's a prison.

Tuesday, June 01, 2010

Reading and not understanding

I can't remember when I first read The Innovator's Dilemma, but it was only a few years back when I first read The Tipping Point. Both are good books and complement each other. However, some interesting conversations over the long weekend here made it clear to me that some people who say they've read them either haven't or have failed to understand them. Now it's one thing when friends fall into this category, but it's a completely different thing when it's business leaders quoting them as gospel to back up dubious choices. No individuals' names and no company names, but it turns out it's quite common in our industry. Probably elsewhere too. Maybe there are some texts (books, papers, etc.) where you should be forced to pass a test before being allowed to refer to them.

Thursday, May 27, 2010

TweetDeck

Many of the folks at the Thinking Digital conference I'm attending are using Macs and running TweetDeck. So while sitting here listening to yet another great talk, I decided to install it. Wow! I've been dabbling with Twitter for a while, but I can now see that others have been tweeting at me and I hadn't noticed until now. Apologies to everyone to whom that applies! Now I may start using Twitter more.

My next phone?

Last October I decided to upgrade my phone. At the time the iPhone wasn't available through my provider, so I went with the HTC Hero, running Android. It got good write-ups and I've had an interest in Android for a while. Since getting it I've enjoyed it, but it's not a match for the iPhone. The hardware is too slow, it's not as responsive, the store isn't as good, you can't (couldn't) store apps on the SD card, and there are a few other minor issues.

But I like the phone, so am glad I got it. That is, until recently. My phone is running Android 1.5, and over the past 6 months we've seen 2.1 and now 2.2 come out. Now of course I could go and install these on my phone myself, but the "preferred" route is to get the official release from HTC, which would include their SenseUI, a great selling point over other Android phones and the iPhone. Yet to date there is still no official 2.1 release from HTC, and there may never be a 2.2 release for it.

Now I understand that technology can get dated and old things can't always run the new stuff. But just over 6 months after I got it I refuse to believe that my phone is outdated! Maybe this is an HTC specific issue and they really just need to get their act together. But it appears that Google would disagree and that fragmentation is inevitable. If it is then I am concerned for the future of Android.

For now I'll wait to see what comes from HTC. But when my contract's up I will likely move to an iPhone and mark my journey with Android down to experience.

Tuesday, May 25, 2010

Paul on PaaS

It's been a while since Paul Fremantle and I caught up, but it's good to see that we're still thinking along similar lines on a number of topics. This time it's PaaS and vendor lock-in. In fact, there's a growing concern that this is something that may not be obvious to users, given the amount of hype that's building in the atmosphere.

Thursday, May 20, 2010

Interesting Cloud announcement

It's another week and we have another announcement around Cloud, this time from Google. Quite interesting but the underlying message is pretty much what I said in an earlier entry: if you want to move to this type of cloud then you'd better be expecting to re-code some or all of your application. Oh joy. Here we go again! Even the tag line of "... write once, deploy anywhere" is a lot of smoke-and-mirrors. "Deploy to any one of four vendor specific Clouds, but don't forget about the lock-in potential there" would be more appropriate.

Come on guys. We really cannot afford to reinvent the world again. So as a prospective user, unless you really aren't interested in leveraging your existing investments in software and people, this looks like another non-starter. Maybe 2 out of 10 for marketing buzz, but 0 for effort.

Sunday, May 16, 2010

Duane tells it like it is

I've known Duane "Chaos" Nickull for many years. He's a great guy, a good friend, and he knows his stuff on a range of technical and not-so-technical subjects (he's an excellent musician, for instance). His most recent post made me laugh out loud! A good entry to say the least.

Friday, May 07, 2010

Rich Frisbie

I met some really great guys when Bluestone acquired Arjuna Solutions. It was a great culture and family, one with which I'm pleased to have been involved. It's therefore very sad to hear that one of the friends I made during those years, Rich Frisbie, has passed on so suddenly and at such a young age. I wish his family all the best and will have a silent toast to Rich. 'nuff said.

Thursday, May 06, 2010

The Chewbacca Defense

I use it all the time when I run into things that just don't make sense. It's a light-hearted way of telling people they need to think more!

Monday, May 03, 2010

Plan 9 operating system

While reading about cult movies I was suddenly reminded of the Plan 9 operating system. It came out while we were deep into the original Arjuna development and were heavy users of Sun SPARCs, so naturally it was something we had to investigate (along with the Spring operating system)! It had some interesting ideas, and with distribution at its heart it was very relevant to what we were researching (the University was the home of the Newcastle Connection, which had some similar aims many years before).

Anyway, it was an interesting time and I'm pleasantly surprised to learn that there's still work going on into Plan 9, twenty something years after it began. Very cool!

Friday, April 30, 2010

Cloudy days ahead for applications?

I've been thinking a lot recently about the Cloud and its potential as a disruptive technology. I got to wondering about how we arrived at where we are today, and one of my favourite books sprang to mind as a way of articulating that, at least to myself.

To misuse HG Wells ever so slightly, "No one would have believed in the last years of the first decade of the twenty first century that the world of enterprise software was being watched keenly and closely by intelligences greater than man's and yet as mortal as his own; that as men busied themselves about their various concerns they were scrutinised and studied, perhaps almost as narrowly as a man with a microscope might scrutinise the transient creatures that swarm and multiply in a drop of water. With infinite complacency men went to and fro over this globe about their little affairs, serene in their assurance of their empire over middleware. It is possible that the infusoria under the microscope do the same. No one gave a thought to some of the relatively new companies as sources of danger to their empires, or thought of them only to dismiss the idea that they could have a significant impact on the way in which applications could be developed and deployed. Yet across the gulf of cyberspace, minds that are to our minds as ours are to those of the beasts that perish, intellects vast and cool and unsympathetic, regarded enterprise applications with envious eyes, and slowly and surely drew their plans against us."

Of course the rise of Cloud practitioners and vendors was in no way as malevolent as the Martians, but the potential impact on the middleware industry may be no less dramatic (or drastic). Plus there was a level of ignorance (arrogance) on the part of some middleware vendors towards the likes of Amazon and Google, just as the Victorians believed themselves unassailable and masters of all they surveyed.

But there are still a number of uncertainties around where this new wave is heading. One of them is: exactly what does this mean for applications? On the one hand there are those who believe applications and their supporting infrastructure (middleware) must be rewritten from scratch. Then there are others who believe existing applications must be supported. I've said before that I believe as an industry we need to be leveraging what we've been developing for the past few decades. Of course some things need to change and evolve, but if you look at what most people who are using or considering using Cloud expect, it's to be able to take their existing investments and Cloudify them.

This shouldn't come as a surprise. If you look at what happened with the CORBA-to-J2EE transition, or the original reason for the development of Web Services, or even how a lot of the Web works, they're all examples of reusing existing investments to one degree or another. Of course over the years the new (e.g., J2EE) morphed away from the old, presenting other ways in which to develop applications to take advantage of the new and different capabilities those platforms offered. And that will happen with Cloud too, as it evolves over the next few years. But initially, if we're to believe that there are economic benefits to using the Cloud, then they have to support existing applications (of which there are countless), frameworks (of which there are many) and skill sets of the individuals who architect them, implement them and manage them (countless again). It's outsourcing after all.

Thursday, April 29, 2010

SOA metrics post

I wrote this a while ago and someone forgot to let me know it had finally been published. Better late than never I suppose.

Wednesday, April 28, 2010

A Platform of Services

Back in the '70s and '80s, when multi-threaded processes and languages were still very much on the research agenda, distributed systems were developed based on a services architecture, with services as the logical unit of deployment, replication and fault containment. If you wanted to service multiple clients concurrently then you'd typically fire off one server instance per client. Of course your servers could share information between themselves if necessary to give the impression of a single instance, using sockets, shared memory, disk, etc.
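
To make that concrete, here's a minimal sketch (mine, not code from any of those early systems) of the process-per-client style using Python's standard library; the port and the trivial echo behaviour are just placeholders:

# A minimal sketch of the classic one-server-process-per-client model.
# Each connection is handled in its own forked child process, so a crash
# in one handler is contained and cannot bring down the others.
import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Echo each line back to the client until it disconnects.
        for line in self.rfile:
            self.wfile.write(line)

if __name__ == "__main__":
    # ForkingTCPServer (POSIX only) forks a new process per connection,
    # mirroring the process-per-client services of those early systems.
    with socketserver.ForkingTCPServer(("localhost", 9000), EchoHandler) as server:
        server.serve_forever()

Swap ForkingTCPServer for ThreadingTCPServer and you get the thread-per-client alternative, which is precisely the trade-off discussed below.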

Now although those architectures were almost always forced on us by limitations in the operating systems, languages and hardware of the time, it also made a lot of sense to have the server as the unit of deployment, particularly from the perspectives of security and fault tolerance. So even when operating systems and languages started to support multi-threading, and we started to look at standardising distributed systems with ANSA, DCE and CORBA, the service remained a core part of the architecture. For example, CORBA has the concept of Core Services, such as persistence, transactions and security, and although the architecture allows them to be colocated in the same process as each other or the application for performance reasons, many implementations continued to support them as individual services/processes.

Yes there are trade-offs to be made between, say, performance and fault tolerance. Tolerating the crash of a thread within a multi-threaded process is often far more difficult than tolerating the crash of an individual process. In fact a rogue thread could bring down other threads or prevent them from making forward progress, preventing the process (server) from continuing to act on behalf of multiple clients. However, invoking services as local instances (e.g., objects) in the same process is a lot quicker than if you have to resort to message passing, whether or not based on RPC.

However, over the past decade or so as threading became a standard part of programming languages and processor performance increased more rapidly than network speeds, many distributed systems implementations moved to colocating services as the default, with the distributed aspect really only applying to the interactions between the business client and the business service. In some cases this was the only way in which the implementation worked, i.e., putting core infrastructure services in separate processes and optionally on different machines was simply no longer an option.

Of course the trade-offs I mentioned kicked in and were made for you (enforced on you) by the designers, often resulting in monolithic implementations. With the coining of the SOA term and the rediscovery of services and loose coupling, many started to see services as beneficial to an architecture despite any initial issues with performance. As I've said before, SOA implementations based on CORBA have existed for many years, although of course you need more than just services to have SOA.

Some distributed systems implementations that embraced SOA started to move back to a service-oriented architecture internally too. Others were headed in that direction anyway. Still others stayed where they were in their colocated, monolithic worlds. And then came Cloud. I'm hoping that as an industry we can leverage most of what we've been implementing over the past few decades, but what architecture is most conducive to being able to take advantage of the benefits Cloud may bring? I don't think we're quite there yet to be able to answer that question in its entirety, but I do believe that it will be based on an architecture that utilises services. So if we're discussing frameworks for developing and deploying applications to the cloud we need to be thinking that those core capabilities needed by the application (transactions, security, naming etc.) will be remote, i.e., services, and they may even be located on completely different cloud infrastructures.
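
As a rough sketch of what that means for application code (the naming service URL and JSON response format here are entirely hypothetical, invented only for illustration), the application discovers where a capability lives at run time instead of assuming it is linked in locally:

# A hypothetical example: resolving a capability via a remote naming
# service over HTTP rather than an in-process registry. The URL and the
# response format are assumptions made up purely for illustration.
import json
import urllib.parse
import urllib.request

NAMING_SERVICE = "http://naming.example.com/resolve"  # hypothetical endpoint

def resolve(service_name: str) -> str:
    """Ask the remote naming service where 'service_name' currently lives."""
    query = urllib.parse.urlencode({"name": service_name})
    with urllib.request.urlopen(f"{NAMING_SERVICE}?{query}") as response:
        return json.load(response)["endpoint"]

# The application no longer assumes transactions, security or naming are
# local libraries; it looks them up at run time:
# payments = resolve("payments")
# ...and then invokes that endpoint over HTTP/REST.

Whether that indirection is worth the extra network hop is the same performance-versus-flexibility trade-off as before, but it is the kind of late binding that lets a capability live on a completely different cloud infrastructure.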

Now this may be obvious to some, particularly given discussions around PaaS and SaaS, but I'm not so sure everyone agrees, given what I've seen, heard and read over the past months. What I'm particularly after is the services architecture that CORBA espoused but which many people overlooked or didn't realise was possible, particularly if they spoke with the large CORBA vendors at the time: an architecture where the services could come from heterogeneous vendors as well as being based on different implementation languages. This is something that will be critical for the cloud, as vendors and providers come and go, and applications need to choose the right service implementation dynamically. The monolithic approach won't work here, particularly if those services may need to reside on completely different cloud infrastructures (cf. CORBA ORB). I'm hoping we don't need to spend a few years trying to shoehorn monoliths into this only to have that Eureka moment!

The lack of standards in the area will likely impact interoperability and portability in the short term, but once standards do evolve those issues should be alleviated somewhat. The increasing use of REST should help immediately too though.

Tuesday, April 20, 2010

Some things really shouldn't be changed

There are some things that shouldn't change, no matter how good an idea it may seem at first glance. For instance, the original Coke formula, the name of Coco Pops, and remaking The Searchers. Then again, there are some things that really do benefit from a revision, such as Battlestar Galactica or the laws of gravity.

So it was with some trepidation that I heard they were going to remake The Prisoner. The original is a classic of 1960s TV that has stood the test of time. I remember watching it at every opportunity while growing up (in the days when we only had four TV channels, so repeats were few and far between). Patrick McGoohan was The Prisoner, and while the stories were often "out there", the series had this pulling power that made it unmissable.

I wondered how anyone could remake it and capture the essence of the original show. But I decided to be open-minded and sat down the other night to watch the first episode of the new series. Afterwards the first thought I had was "I'll never get that hour back again!" As a remake, it was terrible. As a stand-alone series, it was probably passable.

It looks like another good idea bites the dust. I'll be taking the series off my Sky+ reminder now and if I'm stuck for something to do at that time I'll either watch some wood warp or maybe watch some paint dry: both would be far more stimulating activities!

Monday, April 19, 2010

Volcano activity spoils conference

I was an invited speaker at the first DoD-sponsored SOA Symposium last year and really enjoyed it. I got invited to this year's event and was going to speak on SOA and REST. I think it would have been (and will be) as good as last year, but unfortunately a certain volcano in Iceland has meant that my flights have been cancelled. So I'll have to watch from afar and hope that the troubles in the sky clear up soon. My best wishes go to the event organizers and I hope that next year presents an opportunity to attend for a second time.