I've been thinking a lot recently about the Cloud and its potential as a disruptive technology. I got to wondering about how we arrived at where we are today, and one of my favourite books sprang to mind as a way of articulating that, at least to myself.
To misuse H.G. Wells ever so slightly, "No one would have believed in the last years of the first decade of the twenty-first century that the world of enterprise software was being watched keenly and closely by intelligences greater than man's and yet as mortal as his own; that as men busied themselves about their various concerns they were scrutinised and studied, perhaps almost as narrowly as a man with a microscope might scrutinise the transient creatures that swarm and multiply in a drop of water. With infinite complacency men went to and fro over this globe about their little affairs, serene in their assurance of their empire over middleware. It is possible that the infusoria under the microscope do the same. No one gave a thought to some of the relatively new companies as sources of danger to their empires, or thought of them only to dismiss the idea that they could have a significant impact on the way in which applications could be developed and deployed. Yet across the gulf of cyberspace, minds that are to our minds as ours are to those of the beasts that perish, intellects vast and cool and unsympathetic, regarded enterprise applications with envious eyes, and slowly and surely drew their plans against us."
Of course the rise of Cloud practitioners and vendors was in no way as malevolent as the Martians, but the potential impact on the middleware industry may be no less dramatic (or drastic). There was also a level of ignorance (arrogance) on the part of some middleware vendors towards the likes of Amazon and Google, just as the Victorians believed themselves unassailable, masters of all they surveyed.
But there are still a number of uncertainties around where this new wave is heading. One of them is exactly what this means for applications. On the one hand there are those who believe applications and their supporting infrastructure (middleware) must be rewritten from scratch. On the other there are those who believe existing applications must be supported. I've said before that I believe as an industry we need to be leveraging what we've been developing for the past few decades. Of course some things need to change and evolve, but if you look at what most people who are using or considering using Cloud expect, it's to be able to take their existing investments and Cloudify them.
This shouldn't come as a surprise. If you look at what happened with the CORBA-to-J2EE transition, or the original reason for the development of Web Services, or even how a lot of the Web works, they're all examples of reusing existing investments to one degree or another. Of course over the years the new (e.g., J2EE) morphed away from the old, presenting other ways in which to develop applications to take advantage of the new and different capabilities those platforms offered. And that will happen with Cloud too, as it evolves over the next few years. But initially, if we're to believe that there are economic benefits to using the Cloud, then Cloud platforms have to support existing applications (of which there are countless), frameworks (of which there are many) and the skill sets of the individuals who architect, implement and manage them (countless again). It's outsourcing, after all.
I work for Red Hat, where I lead JBoss technical direction and research/development. Prior to this I was SOA Technical Development Manager and Director of Standards. I was Chief Architect and co-founder at Arjuna Technologies, an HP spin-off (where I was a Distinguished Engineer). I've been working in the area of reliable distributed systems since the mid-80's. My PhD was on fault-tolerant distributed systems, replication and transactions. I'm also a Professor at Newcastle University and in Lyon.
Thursday, April 29, 2010
SOA metrics post
I wrote this a while ago and someone forgot to let me know it had finally been published. Better late than never, I suppose.
Wednesday, April 28, 2010
A Platform of Services
Back in the 70's and 80's, when multi-threaded processes and languages were still very much on the research agenda, distributed systems were developed based on a services architecture, with services as the logical unit of deployment, replication and fault containment. If you wanted to service multiple clients concurrently then you'd typically fire off one server instance per client. Of course your servers could share information between themselves if necessary to give the impression of a single instance, using sockets, shared memory, disk etc.
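To make that concrete, here's a minimal sketch of such a dispatcher. I've used Java purely for familiarity (these architectures long predate it), and the "service-worker" executable is a hypothetical stand-in: a real system of the era would hand the connection's descriptor over to the child, or have the client reconnect to its private instance.

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    // A dispatcher in the style of those early architectures: one server
    // instance (process) per client, the process being the unit of
    // deployment and fault containment (cf. inetd).
    public class PerClientDispatcher {
        public static void main(String[] args) throws IOException {
            try (ServerSocket listener = new ServerSocket(9000)) {
                while (true) {
                    Socket client = listener.accept();
                    // Fire off one worker process per client. "service-worker"
                    // is hypothetical; real systems would pass the connection
                    // itself to the child rather than just its port.
                    new ProcessBuilder("service-worker",
                            Integer.toString(client.getPort()))
                            .inheritIO()
                            .start();
                    client.close();
                }
            }
        }
    }

The point isn't the code so much as the shape: the process boundary is doing the isolation work that threads, later, would not.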
Now although those architectures were almost always forced on us by limitations in the operating systems, languages and hardware at the time, it also made a lot of sense to have the server as the unit of deployment, particularly from the perspectives of security and fault tolerance. So even when operating systems and languages started to support multi-threading, and we started to look at standardising distributed systems with ANSA, DCE and CORBA, the service remained a core part of the architecture. For example, CORBA has the concept of Core Services, such as persistence, transactions and security, and although the architecture allows them to be colocated in the same process as each other or the application for performance reasons, many implementations continued to support them as individual services/processes.
Yes, there are trade-offs to be made between, say, performance and fault tolerance. Tolerating the crash of a thread within a multi-threaded process is often far more difficult than tolerating the crash of an individual process: a rogue thread could bring down other threads or prevent them from making forward progress, stopping the process (server) from continuing to act on behalf of multiple clients. However, invoking services as local instances (e.g., objects) in the same process is a lot quicker than if you have to resort to message passing, whether or not it's based on RPC.
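That shared fate is easy to demonstrate. Here's a contrived but runnable Java sketch pairing a well-behaved worker thread with a rogue one that exhausts the heap they share; run it with a small heap (e.g., java -Xmx32m RogueThread) and the worker dies along with the rogue, whereas a rogue *process* would only have killed itself.

    import java.util.ArrayList;
    import java.util.List;

    // Two threads sharing one process: a well-behaved worker and a rogue
    // that exhausts their common heap.
    public class RogueThread {
        // Static, so the hoarded memory stays reachable even after the
        // rogue thread has died of the OutOfMemoryError it caused.
        private static final List<byte[]> hoard = new ArrayList<>();

        public static void main(String[] args) {
            Thread worker = new Thread(() -> {
                while (true) {
                    // Each iteration allocates a little (the string
                    // concatenation), as any real request handler would.
                    System.out.println("serving client at " + System.nanoTime());
                    try {
                        Thread.sleep(500);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            });
            worker.start();

            // The rogue: hoards memory until allocation fails. Because the
            // heap is shared, the OutOfMemoryError eventually strikes the
            // worker's allocations too.
            while (true) {
                hoard.add(new byte[1_000_000]);
            }
        }
    }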
However, over the past decade or so, as threading became a standard part of programming languages and processor performance increased more rapidly than network speeds, many distributed systems implementations moved to colocating services as the default, with the distributed aspect really only applying to the interactions between the business client and the business service. In some cases this was the only way in which the implementation worked, i.e., putting core infrastructure services in separate processes, and optionally on different machines, was simply no longer an option.
Of course the trade-offs I mentioned kicked in and were made for you (enforced on you) by the designers, often resulting in monolithic implementations. With the coining of the SOA term and the rediscovery of services and loose coupling, many started to see services as beneficial to an architecture despite any initial issues with performance. As I've said before, SOA implementations based on CORBA have existed for many years, although of course you need more than just services to have SOA.
Some distributed systems implementations that embraced SOA started to move back to a service-oriented architecture internally too. Others were headed in that direction anyway. Still others stayed where they were in their colocated, monolithic worlds. And then came Cloud. I'm hoping that as an industry we can leverage most of what we've been implementing over the past few decades, but what architecture is most conducive to taking advantage of the benefits Cloud may bring? I don't think we're quite there yet to be able to answer that question in its entirety, but I do believe it will be based on an architecture that utilises services. So if we're discussing frameworks for developing and deploying applications to the cloud, we need to be thinking that those core capabilities needed by the application (transactions, security, naming, etc.) will be remote, i.e., services, and they may even be located on completely different cloud infrastructures.
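As a sketch of what that might look like from the application's side, consider a transaction capability reached over HTTP rather than linked in as a library. Everything here is hypothetical: the coordinator URL, the resource model (POST to create a transaction, its URI returned in a Location header) and the service itself. It's the shape of the interaction that matters, not the details.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // The transaction capability as a remote service, addressed by a URI
    // that could just as easily point at a different cloud infrastructure.
    public class RemoteTransactionClient {
        // Hypothetical coordinator endpoint.
        private static final URI COORDINATOR =
                URI.create("https://txn.example-cloud.com/transactions");

        public static void main(String[] args) throws Exception {
            HttpClient http = HttpClient.newHttpClient();

            // Begin a transaction by POSTing to the coordinator; the new
            // transaction's URI comes back in the Location header.
            HttpRequest begin = HttpRequest.newBuilder(COORDINATOR)
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<Void> response =
                    http.send(begin, HttpResponse.BodyHandlers.discarding());
            String txn = response.headers().firstValue("Location")
                    .orElseThrow(() -> new IllegalStateException("no Location"));

            System.out.println("transaction created at " + txn);
            // ... enlist participants, do the work, then terminate the
            // transaction with a further request to its URI ...
        }
    }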
Now this may be obvious to some, particularly given discussions around PaaS and SaaS, but I'm not so sure everyone agrees, given what I've seen, heard and read over the past months. What I'm particularly after is the services architecture that CORBA espoused but which many people overlooked or didn't realise was possible, particularly if they spoke with the large CORBA vendors at the time: an architecture where the services could come from heterogeneous vendors as well as being based on different implementation languages. This will be critical for the cloud, as vendors and providers come and go, and applications need to choose the right service implementation dynamically. The monolithic approach won't work here, particularly if those services may need to reside on completely different cloud infrastructures (cf. the CORBA ORB). I'm hoping we don't need to spend a few years trying to shoehorn monoliths into this only to have that Eureka moment!
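One way to keep that choice open in code is to program against a neutral contract and bind an implementation as late as possible. A minimal Java sketch, with illustrative names, using the standard ServiceLoader mechanism; any other discovery or registry scheme would make the same point.

    import java.util.ServiceLoader;

    public class ServiceSelection {
        // A neutral contract: nothing here names a vendor, a language or
        // a cloud; implementations may well be remote proxies.
        public interface NamingService {
            String lookup(String name);
        }

        public static void main(String[] args) {
            // Bind whichever implementation is available at deployment
            // time (registered via META-INF/services). Swapping vendors
            // or clouds means changing the binding, not the application.
            NamingService naming = ServiceLoader.load(NamingService.class)
                    .findFirst()
                    .orElseThrow(() ->
                            new IllegalStateException("no NamingService bound"));
            System.out.println(naming.lookup("orders"));
        }
    }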
The lack of standards in the area will likely impact interoperability and portability in the short term, but once standards do evolve those issues should be alleviated somewhat. The increasing use of REST should help immediately, though.
Tuesday, April 20, 2010
Some things really shouldn't be changed
There are some things that shouldn't change, no matter how good an idea it may seem at first glance. For instance, the original Coke formula, the name of Coco Pops, and remaking The Searchers. Then again there are some things that really do benefit from a revision, such as Battlestar Galactica or the laws of gravity.
So it was with some trepidation that I heard they were going to remake The Prisoner. The original is a classic of 1960's TV that has stood the test of time. I remember watching it at every opportunity while growing up (in the days when we only had four TV channels, so repeats were few and far between). Patrick McGoohan was The Prisoner, and while the stories were often "out there", the series had a pulling power that made it unmissable.
I wondered how anyone could remake it and capture the essence of the original show. But I decided to be open-minded and sat down the other night to watch the first episode of the new series. Afterwards my first thought was "I'll never get that hour back again!" As a remake, it was terrible. As a stand-alone series, it was probably passable.
It looks like another good idea bites the dust. I'll be taking the series off my Sky+ reminder now, and if I'm stuck for something to do at that time I'll either watch some wood warp or some paint dry: both would be far more stimulating activities!
Monday, April 19, 2010
Volcano activity spoils conference
I was an invited speaker at the first DoD-sponsored SOA Symposium last year and really enjoyed it. I was invited to this year's event and was going to speak on SOA and REST. I think it would have been as good as last year's (and the event itself still will be), but unfortunately a certain volcano in Iceland has meant that my flights have been cancelled. So I'll have to watch from afar and hope that the troubles in the sky clear up soon. My best wishes go to the event organisers, and I hope that next year presents an opportunity to attend for a second time.