Very well done! A bonus Christmas present. Fix the security issue(s) rather than ignore them and attempt to silence those who might call attention to them. OK, the latter might be cheaper and easier in the short term, but as a user I'd much prefer the former were the avenue explored!
And the thesis can be obtained from here.
I work for Red Hat, where I lead JBoss technical direction and research/development. Prior to this I was SOA Technical Development Manager and Director of Standards. I was Chief Architect and co-founder at Arjuna Technologies, an HP spin-off (where I was a Distinguished Engineer). I've been working in the area of reliable distributed systems since the mid-80's. My PhD was on fault-tolerant distributed systems, replication and transactions. I'm also a Professor at Newcastle University and Lyon.
Tuesday, December 28, 2010
Friday, December 24, 2010
I must be a traditionalist?
Two of my favourite movies at any time of year, but particularly at Christmas, are Scrooge with Alastair Sim (the definitive version IMO, with only Scrooged coming close) and It's a Wonderful Life. Both in black and white as they were intended originally. So what happens this year? The TV is showing colourised versions of them! No, no, no, no, no! That is wrong on so many levels! Almost as bad as "dogs and cats living together".
Now it's entirely likely that if colour film had existed when both movies were made then the directors would have chosen it, and those movies would likely look and feel different to their colourised cousins. But the fact is that it didn't, and the film directors, writers etc. used the black and white medium to great effect. It's part of the atmosphere of the movies and is so ingrained in the making of them that when you watch you don't notice it's not in colour. And when you see it in colour today it's just wrong. Almost like watching a program or film based on something that was intended originally for radio. The act of moving it from one medium to another alters it, but rarely do the people involved in that alteration seem to understand that or take it into account.
Colourising films is a great example of the adage that "just because you can do something doesn't mean you should"! So if you have a choice between seeing the above two movies in colour or black and white, I recommend sticking with the original. You won't be disappointed.
Thursday, December 23, 2010
A late Christmas present?
It's probably too late to expect this for Christmas, but I think a reprint of Geoffrey Hoyle's book is definitely on the cards for the new year!
Wednesday, December 22, 2010
End of an era
It's a bit of a sad day today. For over 10 years I've been using an HP Jornada 720 for a lot more than it was originally intended. But since I got it in 2000 it has become harder and harder to interface it with any of my desktop or laptop machines. You tend not to find machines today with IR or RS232 ports! And getting up-to-date versions of Java for it, for example, has proven difficult, to say the least!
Ultimately over the last couple of years I've been using it as a glorified address book. But as I use my smartphone more, or the built-in equivalent applications on my laptop, resorting to the Jornada became more and more of a hassle. So despite the fact that it is the machine I've used the longest over the past 30 years, I've decided to retire it and it is now resting comfortably in the loft. Maybe, just maybe, I'll resurrect it if the need arises. Until then I'll have fond memories of one of the best and most versatile machines HP ever built!
Monday, December 20, 2010
JUDCon 2011 in Boston
I've been thinking about tracks for JUDCon next year and posted a preliminary Call for Presentations earlier today. Take a look if you're at all interested in JBoss as a developer or as a user.
Io
Recently I've been reading up on Io the language, not to be confused with Io the moon of Jupiter. It's a very interesting prototype-based language, built on very few concepts (everything is either an object or a message, and objects have slots which are either attributes or methods). It reminds me a lot of Smalltalk. But what surprises me the most is that it is such a simple language and yet it's possible to do some extremely complex things with it. Although this wasn't one of my original pet projects for Christmas, I think it will be now!
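To give a flavour of those concepts, here's a minimal sketch in Python (not Io itself, just my own illustrative approximation) of Io-style prototype semantics: objects are just bags of slots, new objects are made by cloning, and message dispatch walks the prototype chain.

```python
# A minimal sketch of Io-style prototype semantics: every object is a bag of
# slots, new objects are made by cloning, and slot lookup walks the proto chain.

class Proto:
    def __init__(self, proto=None):
        self.proto = proto      # parent object for delegated lookup
        self.slots = {}         # attributes and "methods" live in the same map

    def clone(self):
        return Proto(proto=self)

    def send(self, name, *args):
        # Message dispatch: search this object's slots, then its prototypes.
        obj = self
        while obj is not None:
            if name in obj.slots:
                value = obj.slots[name]
                return value(self, *args) if callable(value) else value
            obj = obj.proto
        raise AttributeError(f"object does not respond to '{name}'")

# "vehicle" is a prototype; "car" clones it and overrides a slot.
vehicle = Proto()
vehicle.slots["description"] = "a generic vehicle"
vehicle.slots["describe"] = lambda self: self.send("description")

car = vehicle.clone()
car.slots["description"] = "a small red car"

print(vehicle.send("describe"))  # method and data found locally
print(car.send("describe"))      # method inherited, data overridden
```

The nice part, and what the Python sketch only hints at, is that in Io even control flow is just messages sent to objects, so the whole language fits in your head remarkably quickly.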
I'm not sure if I'll take it further than that, but it's sufficiently different to other languages I've been playing with recently that it may well get some quality time from me as I figure out exactly what it's good for and where its limitations reside. If you're looking for something different in the language arena then take a look too. It won't take long to get to grips with the basics and you might well be pleasantly surprised by what you find.
Sunday, December 19, 2010
Data on the outside (of the public Cloud) versus data on the inside (of the public Cloud)
A few years back Pat Helland wrote a nice paper about the issues of data outside and inside the service boundary and how the locale impacts traditional transaction semantics. This was at a time when there was a lot of work and publicity around extended transaction models. Pat used an analogy to relativity to drive home his point, whereas I'd been using quantum mechanics analogies. But regardless of how it is explained, the idea is that data, and where it is located, is ultimately the bottleneck to scalability if total consistency is required (in a finite period of time).
So what's this got to do with Cloud? Well I've been thinking a lot about Cloud for quite a while (depending on your definition of Cloud, it could stretch back decades!) There are many problem areas that need to be worked out for the Cloud to truly succeed, including security, fault tolerance, reliability and performance. Some of these are related directly to why I believe private Clouds will be the norm for most applications.
But probably the prime reason why I think public Clouds won't continue to hog the limelight for enterprise or mission critical applications is the data issue. Processor speeds continue to increase (well, maybe not individual processor speeds, but with multiple cores on the same chip the aggregate result is the same), memory speeds increase, and even disk speeds are increasing when you factor in the growing use of solid state. Network speeds have always lagged behind and that continues to be the case. If you don't have much data/information, or most of it is volatile and generated as part of the computation, then moving to a public cloud may make sense (assuming the problems I mentioned before are resolved). But for those users and applications that have copious amounts of data, moving it into the public cloud is going in the wrong direction: moving the computation to the data makes far more sense.
Now don't get me wrong. I'm not suggesting that public clouds aren't useful and won't remain useful. It's how they are used that will change for certain types of application. Data has always been critical to businesses and individuals, whether you're the largest online advertiser trying to amass as much information as possible, or whether you're just looking to maintain your own financial data. Trusting who to share that information with is always going to be an issue because it is related directly to control: do you control your own information when you need it or do you rely on someone else to help (or get in the way)? I suspect that for the vast majority of people/companies, the answer will be that they want to retain control over their data. (Over the years that may change, in the same way that banks became more trusted versus keeping your money under the mattress; though perhaps the bank analogy and trust isn't such a good one these days!)
Therefore, I think where the data will reside will define the future of cloud, and that really means private cloud. Public clouds will be useful for cloud bursting and number crunching for certain types of application, and of course there will be users who can commit entirely to the public cloud (assuming all of the above issues are resolved) because they don't have critical data stores on which they rely or because they don't have existing infrastructure that can be turned into a protected (private) cloud. So what I'm suggesting is that data on the outside of the public cloud, i.e., within the private cloud, will dominate over data on the inside of the public cloud. Of course only time will tell, but if I were a betting man ...
Saturday, December 18, 2010
It's Christmas!
In the immortal words of the great Noddy Holder and Slade ... It's Christmas (or will be very soon!) I have a couple of weeks off due to not taking much vacation throughout 2010 (which is a pattern I seem to follow every year). So as is usual for any vacation (at least mine), I've written a list, checked it twice, ruled out those things that are naughty, leaving only those that are nice, and come up with way more things than I can do in the time available. But hey, it's better to be overactive than under!
So what's in the list? Well there's the usual smattering of work related efforts. But I've tried to make the majority of them "pet projects". Of course they'll probably have some impact on work eventually, but that's not the reason for them at this point. Top of my list is doing some refreshers on some languages I haven't had a chance to use much recently, including Ruby and Erlang. Then I've got a paper I want to write on REST and transactions with Bill and Mike (OK, so this is work related, but I do like writing papers, so it almost doesn't count). I want to do some work on JavaSim too and if there's time, get back to my STM effort.
Lots to do. Lots to look forward to. Oh and then there's Christmas too! I love this time of year.
Wednesday, December 15, 2010
Another CloudBees acquisition
This is *not* a CloudBees blog, but yet again I have to say congratulations to Sacha and the team for their latest announcement! Very impressive move.
Sunday, December 12, 2010
Wednesday, December 08, 2010
Cloud 2.0? Give me a break!
Over 4 years ago a group of analysts and vendors got together to try and rally around the term SOA 2.0. In an attempt to stop that in its tracks, I said a few things that maybe played a small part in helping to show the problems behind the term. I haven't seen or heard of SOA 2.0 much since, so maybe the community effort helped to bring a little sense to the world.
Unfortunately it seems that adding 2.0 to something is still a favourite pastime for those that either can't figure out a good name, or simply don't understand why it works for Web 2.0. With today's announcement that Salesforce have bought Heroku, it seems that we've entered the world of 'Cloud 2'! Oh come on! Let's inject some reality into this again, before it gets jumped on by other vendors or analysts that believe an increment to a term really makes a difference when the technology or architecture hasn't actually evolved.
Apparently Cloud 2 is oriented around social, mobile and real-time. So is this hard real-time, soft real-time, or some other form of real-time, given that when you're using applications in the Cloud today the response is happening in your frame of reference and within your lifetime?
I do believe that the current perception of Cloud is limited to servers and that does need to change, but that can be sorted out by having a true architectural definition of Cloud that is agreed by everyone. But there's no need to call it Cloud 2. In fact that just adds more confusion!
And social aspects of Cloud? Well I'm not a big fan of social networks; I think they're anti-social: go down the pub to see your friends, don't chat to them on IRC!
So let's get a grip on reality! Sticking 2 on the end in the hopes that it'll help will only do the opposite. And as I said for SOA 2.0, if this "social, real-time and mobile" Cloud really is different than what we're getting used to today, then coin a proper term for it, e.g., The Social Cloud.
Tuesday, November 30, 2010
More sad news!
I met Maurice Wilkes a few times when he came to the University to talk. So it's sad to hear that he has passed away. Definitely a pioneer and definitely a very nice man. A sad day indeed.
Monday, November 29, 2010
Android woes coming to an end!
My time as an Android user is drawing to a close. I may come back when they sort things out so that I am no longer at the mercy of the vague upgrade policies of the handset manufacturers or operators, when in reality they really want me to buy a new phone and contract every 6 months in order to get even security upgrades.
It was fun whilst it lasted and even with the benefit of hindsight I think it was the right choice at the time. However, it has left a bad taste and Google needs to sort out the mess. Until then I'll probably move to an iPhone. And I can't really think of a better way of putting it than Douglas Adams: So Long, And Thanks for All the Fish!
Sunday, November 21, 2010
Management and PaaS
Many years ago I spent a few years working for HP as a Distinguished Engineer via their acquisition of Bluestone. Now I learnt a lot through that experience and one of those things was the importance of system management software such as OpenView. In truth I already knew, having done research with HP, IBM and others over the years prior to this; but spending quality time with the team was still informative and illuminating.
Managing middleware systems is a complex task, particularly if you want to do it efficiently and without adversely impacting the system that is being monitored/managed. (Almost like the Schrödinger's cat experiment!) But for large scale, dynamic systems such as those I mentioned previously, good monitoring and management is critical. This remains true if you remove the dynamic/autonomous aspects and assume that a human system administrator will be using the information provided.
This shouldn't come as a surprise. Even your favourite operating system has enjoyed monitoring and management capabilities of varying quality over the decades. The good implementations tend to remain in the background doing their job and you don't even know they are there. And of course as soon as you add in the distributed factor, the need for good monitoring and management cannot be overstated.
But of course you can do without; it's just that you may then have to put up with sub-par performance or efficiency as you need to manually cope with failures, changes in network or machine characteristics, etc. Worse still, issues may go unnoticed until far too late, e.g., the lack of orphan detection can lead to multiple competing services corrupting data or conflicting with one another.
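To make the orphan-detection example concrete, here's a deliberately simplified, illustrative sketch of lease-based liveness tracking (my own approximation, not any particular product): an instance that stops renewing its lease is declared an orphan and fenced before a replacement takes over, so two instances never both believe they own the same data.

```python
# A simplified, illustrative lease-based orphan detector: each service instance
# must renew its lease periodically; an instance whose lease has expired is
# treated as an orphan and fenced before any replacement is started.

import time

LEASE_SECONDS = 5.0

class LeaseTable:
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.expiry = {}    # instance id -> lease expiry time

    def renew(self, instance_id):
        self.expiry[instance_id] = self.clock() + LEASE_SECONDS

    def orphans(self):
        now = self.clock()
        return [i for i, exp in self.expiry.items() if exp <= now]

    def fence(self, instance_id):
        # In a real system this would revoke the instance's access to shared
        # state; here we just drop it from the table.
        self.expiry.pop(instance_id, None)

# Simulated clock so the example runs instantly.
fake_now = [0.0]
table = LeaseTable(clock=lambda: fake_now[0])
table.renew("svc-a")
table.renew("svc-b")

fake_now[0] = 3.0
table.renew("svc-b")            # svc-b keeps renewing; svc-a goes quiet
fake_now[0] = 6.0               # svc-a's lease (expired at 5.0) has lapsed

print(table.orphans())          # ['svc-a']
for orphan in table.orphans():
    table.fence(orphan)
print(table.orphans())          # []
```

Without even this much machinery, a silently failed instance and its hastily started replacement can end up writing to the same data concurrently, which is exactly the corruption scenario above.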
So why am I mentioning this? Is it because it's not obvious, or that I'm running out of interesting things to discuss? Not quite! It should be clear that management/monitoring software is important for middleware solutions (and others). Depending upon what you consider to be monitoring and management, it may be a core component of middleware, or a component in middleware (e.g., the previous example on orphan detection and elimination). But if you're looking at a separate monitoring/management component or infrastructure, then you wouldn't consider that to be your development platform for arbitrary applications, right? You wouldn't, for instance, consider such a software system to be a replacement for your middleware platform, e.g., for CORBA or Java EE.
Over the years I've never heard anyone suggest such a thing, so this isn't something I've even considered people would think. However, recently this has come up in several conversations I've had. And yes, it's in the area of Cloud and specifically PaaS. The statements have been made that PaaS is the monitoring/management, or is all about the monitoring/management. Why would that be any more sensible a statement or position to take in the Cloud world if it wasn't sensible prior to Cloud? I really don't know. It doesn't make sense any more than ...
"Why would a Wookiee, an eight-foot tall Wookiee, want to live on Endor, with a bunch of two-foot tall Ewoks? That does not make sense! But more important, you have to ask yourself: What does this have to do with this case? Nothing. Ladies and gentlemen, it has nothing to do with this case! It does not make sense! Look at me. I'm a lawyer defending a major record company, and I'm talkin' about Chewbacca! Does that make sense? Ladies and gentlemen, I am not making any sense! None of this makes sense! And so you have to remember, when you're in that jury room deliberatin' and conjugatin' the Emancipation Proclamation, does it make sense? No! Ladies and gentlemen of this supposed jury, it does not make sense! If Chewbacca lives on Endor, you must acquit! The defense rests."
With PaaS we're talking about a Platform (the first letter in the acronym kind of gives it away). Now unless you're only developing applications, services or components for your favourite monitoring/management infrastructure, you'll probably consider your platform to be something else entirely, e.g., Java Enterprise Edition with all of its capabilities such as transactions, EJB3, POJOs, messaging, persistence etc. This (as an example) would be your definition of your platform. Very few people would consider OpenView (or whatever they're calling it these days) as their platform. So although something like OpenView, or even JON, are important to any platform, I certainly wouldn't consider them to be the platform.
Or maybe I'm missing something?!
Friday, November 19, 2010
REST + transactions paper
We've been working on REST+transactions for a long time now, with a number of iterations and implementations spanning over a decade. After various discussions over the past few months I'm hoping that a few new organizations (academic and industrial) will be coming forward in the near future to help contribute to this work. However, I think it's time we wrote a paper on this and got it published. We've got enough material and experience to target a range of conferences and workshops. Plus I haven't written a paper in a while: I'm starting to feel rusty!
Iconic distributed systems research comes back around
Distributed systems research has been going on since the very first time someone decided to network multiple computers. Industry and academia have shared this burden and we're here today because of many different people and organisations. Some of this work is often referenced and built on, such as Lamport's paper on Time, Clocks and the Ordering of Events in a Distributed System, or RFC 707, concerning RPCs. But some of it, such as Stuart's work on Coloured Actions, or much of the work on weak consistency replication, rarely gets the attention it deserves.
However, sometimes it's simply a matter of timing, with some research happening before it's really needed or truly appreciated. A case in point is a lot of the work that we saw produced during the mid 1990's on configurable distributed systems, particularly that presented and documented by the IEEE Conference on Configurable Distributed Systems (there were other workshops and institutions doing similar work, but this conference was one that I had personal knowledge of since I had several papers published there over the years). Much of this work concerned autonomous systems that reacted to change, e.g., the failure of machines or networks, or increased workload on a given machine that prevented it from meeting performance metrics. Some of these systems could then dynamically adapt to the changes and reconfigure themselves, e.g., by spinning up new instances of services elsewhere to move the load, or routing messages to alternative machines or via alternate routes to the original destination, thus bypassing network partitions.
This is a gross simplification of the many and varied techniques that were discussed and developed almost two decades ago to provide systems that required very little manual intervention, i.e., they were almost entirely autonomous (in theory, if not always in practice). With the growing popularity of all things Cloud related, these techniques and ideas are extremely important. If Cloud (whether public or private) is to be differentiated from, say, virtualizing infrastructure in IT departments, then autonomous monitoring, management and reconfiguration is critical to ensure that developers can have on-demand access to the compute resources they need and that the system can ensure those resources are performing according to requirements. This needs to happen dynamically and be driven by the system itself in most cases because there should be little/no involvement by your friendly neighbourhood system administrator (in fact in some cases such an individual may not exist!)
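A single pass of such a monitor-and-reconfigure loop can be sketched roughly as follows. This is an illustrative simplification of the general idea, not a description of any particular system from that era; the threshold and the naive load-splitting policy are my own assumptions.

```python
# An illustrative (and much simplified) autonomous reconfiguration pass of the
# kind described above: watch per-node load, and when a node exceeds its
# target, spin up a new instance on a spare node and shift work to it.

LOAD_TARGET = 0.75   # assumed utilisation threshold per node

def rebalance(load_by_node, spare_nodes):
    """Return (new_assignments, remaining_spares) after one control-loop pass."""
    assignments = dict(load_by_node)
    spares = list(spare_nodes)
    for node, load in load_by_node.items():
        if load > LOAD_TARGET and spares:
            new_node = spares.pop(0)         # "spin up" an instance elsewhere
            moved = load / 2                 # naively split the work in half
            assignments[node] = load - moved
            assignments[new_node] = moved
    return assignments, spares

# One pass of the loop: node-1 is overloaded, node-2 is fine.
loads = {"node-1": 0.9, "node-2": 0.4}
new_loads, spares = rebalance(loads, spare_nodes=["node-3"])
print(new_loads)   # node-1's load halved onto node-3
print(spares)      # no spares left
```

The hard problems those systems tackled are all hidden inside the toy parts of this sketch: measuring load accurately without perturbing the system, deciding when a spike is worth reacting to, and migrating work safely while requests are in flight.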
I'm hoping that just because Cloud didn't exist as an identifiable concept back in the 1990's, people and organizations today don't overlook the fact that relevant R&D happened back then. Reworking and retasking some of this prior work could help save us a lot of time and effort, even if it's just to convince engineers today that certain paths or possible solutions aren't viable. For a start I know that I'll be getting my copies of those proceedings out again to refresh my memory!
Tuesday, November 09, 2010
Nice move by Sacha and Cloudbees
From Devoxx ...
"CloudBees, the new company from former JBoss CTO Sacha Labourey, today consolidated its position as the de facto source for Hudson-based continuous integration solutions, from on-premise to the cloud with the acquisition of InfraDNA, the company founded by Hudson creator Kohsuke Kawaguchi to provide software, support and services for Hudson. Kawaguchi joins CloudBees, and will continue to grow and lead the Hudson project. The first release today from the integrated company is Nectar 1.0, formerly InfraDNA’s Certified Hudson for Continuous Integration (ICHCI) offering, which has been enhanced with new features and rebranded for CloudBees. Enhancements include VMware Virtual Machine auto configuration and deployment; enhanced backup services; pre-bundled and configured plug-ins; and an auto-update service. Nectar annual subscriptions start at $3,000 and include free minutes on CloudBees DEV@cloud service, which is currently in beta."
Congratulations!
Saturday, November 06, 2010
PaaS is language specific?
Recently I've been involved in some discussions with people on the subject of PaaS and specifically how the application language impacts it. I was surprised to hear (or read, since some of this was via email) several of the participants assume that you would need a PaaS implementation for each programming language. So that'd be a Java PaaS, a Ruby PaaS, presumably a C++ PaaS and maybe even a COBOL PaaS.
In my most diplomatic manner, I asked if certain individuals had missed the last 20+ years of middleware development; perhaps they had been stuck on some island somewhere, or maybe off on some rocket traveling close to the speed of light and while we experienced 20 years, for them only a few days have passed. See, I do like to give people the benefit of the doubt where possible!
The idea that we need to have a PaaS implementation for all possible programming languages out there (or at least those that may be used in the Cloud), is ridiculous. Think of it like this: if we can virtualize away the hardware, so it's really no longer a major issue in deployment, then we should be able to virtualize (abstract away) the infrastructure upon which the applications will rely. And yes, that infrastructure is middleware.
Oh wait ... we've done this before. Several times in fact! I won't cover them all, but let's just consider a couple of the obvious examples. First, CORBA, where services could be implemented in a range of different programming languages (e.g., C++, Java and even COBOL!) and made available to users that may be written in a completely different language. Second, Java, and specifically J(2)EE. Like it or not, despite the write once run anywhere message that came with Java, many vendors and users simply did not want to, or could not, discard their existing investments in other languages and so found ways to integrate services and components written outside of Java and often without the knowledge of the end users. For instance, there are non-Java transaction services that are integrated into application servers through the JTA. The same goes for JMS.
As an industry we have decades of experience in virtualizing the middleware layer. It doesn't matter if you're writing your applications and services in Java, Ruby or something else entirely: as long as someone provides a binding to the underlying middleware capabilities, you don't expect, and shouldn't be interested in, whether or not the middleware, or PaaS, is implemented in the same language. And in fact it shouldn't be. I'll keep saying this, but we simply cannot afford to stop the world and reinvent the middleware wheel for PaaS. We need to evolve, but we really don't need to start from scratch. I think I got my message across during the aforementioned conversations, but maybe next time I'll resort to drawing some diagrams, perhaps even with some timelines dating back to the 1960's!
Addendum: In case it wasn't clear in the above, of course there will need to be a language specific component to PaaS, but only that which is sufficient in order to allow the underlying implementations (written in whichever language is appropriate) to be interfaced with the application or service. Consider the JTA and JMS examples above.
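To make the addendum concrete, here is a toy sketch (all names are invented, this is not JTA or any real API) of the split being argued for: a language-neutral middleware service underneath, and a thin language-specific binding on top, in the spirit of a JTA implementation fronting a non-Java transaction service.

```python
"""A minimal sketch of the point above: the middleware (here a toy
transaction coordinator) is language-neutral, and each language only
needs a thin binding exposing begin/commit. Illustrative names only."""


class Coordinator:
    """Stands in for the underlying middleware implementation, which
    could be written in any language and reached over a wire protocol."""
    def __init__(self):
        self.log = []

    def invoke(self, operation, txid):
        self.log.append((operation, txid))
        return "ok"


class TransactionBinding:
    """The thin language-specific layer: just enough glue to map local
    calls onto the middleware's operations (compare JTA sitting over a
    non-Java transaction service)."""
    def __init__(self, coordinator):
        self._c = coordinator
        self._next = 0

    def begin(self):
        self._next += 1
        self._c.invoke("begin", self._next)
        return self._next

    def commit(self, txid):
        return self._c.invoke("commit", txid)


coordinator = Coordinator()
tx = TransactionBinding(coordinator)
txid = tx.begin()
tx.commit(txid)
print(coordinator.log)  # [('begin', 1), ('commit', 1)]
```

The binding is the only per-language artefact; everything below it is shared, which is exactly why "one PaaS per language" makes no sense.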
Tuesday, November 02, 2010
JCP EC (re) election
The results are in...the 2010 JCP EC Elections have officially concluded, and final results are available on jcp.org. The 2010 JCP EC Election ballot closed at midnight pacific time on 1 November. Congratulations to the new and re-elected JCP EC Members!
SE/EE EC
Ratified seats: Apache Software Foundation, Red Hat
Open Seat Election: Eclipse, Google
ME EC
Ratified seats: Research in Motion (RIM), Samsung, TOTVS
Open Seat Election: Apix, Stefano Andreani
New and re-elected members will take their seats on Tuesday, 16 November.
JCP members did not ratify Hologic for the SE/EE EC. As the JCP process document prescribes, the PMO will hold an additional ratification ballot for the remaining SE/EE seat soon.
Complete results will be available at the JCP Election page and the JCP blog later today.
Personal versus company resources
A long time ago in a galaxy far far away ... OK, so maybe not necessarily that long ago and certainly not that far away, when work started to intrude on my personal love for computers, I made sure I had a separate machine for personal use (a desktop in those days) and one (or more) for work. That was fine for a while, but eventually it became too much of a hassle to synchronise email, source code etc. Plus, in those days there was very little distinction between my personal (for pleasure) efforts and my work. So multiple machines gave way to one machine. I also tried this with mobile phones (with one for personal use and one for work), but with much the same conclusion.
This situation existed for the best part of a decade and a half. However, in the past year I've been finding that work is intruding more and more on my leisure efforts around computing. Whether those are writing research papers or just playing around with new languages, I find it is harder to stop work (via email, IRC etc.) from distracting me. And the phone just makes it worse!
So recently I've reverted to the way I used to do things, with a dedicated work machine that knows only about work related "stuff" (including email) and a dedicated personal machine that knows nothing about work (and its emails). The phone problem is about to be fixed with a similar strategy. I wonder how long I'll be able to keep up this separation of concerns. But I'm looking forward to trying!
Wednesday, October 27, 2010
OSGi
Dear OSGi Alliance Members,
Thank you all for your votes in the 2010/2011 Board elections!
Along with many returning board companies, we would like to extend a warm welcome to RedHat (Mark Little and David Bosschaert) as a new BoD member. Mark is currently CTO of JBoss, a division of Red Hat. Prior to this he was Technical Development Manager for the JBoss SOA Platform and lead of the JBossESB and JBossTS projects. David is currently Principal Engineer at JBoss, a division of Red Hat working on Open Source OSGi products. Before joining Red Hat, David was a Fellow at Progress Software and Distinguished Engineer at IONA Technologies.
The 2011 Board of Directors will be composed as follows –
Deutsche Telekom AG – Hans-Werner Bitzer
IBM - Dan Bandera
NTT - Ryutaro Kawamura
Oracle - Anish Karmarkar
RedHat – Mark Little
Progress Software – Jamie Merrit
ProSyst Software GmbH - Susan Schwarze
SAP AG - Karsten Schmidt
Software AG - Prasad Yendluri
Telcordia Technologies - Stan Moyer
VMWare (SpringSource) - Peter Cooper-Ellis
We are looking forward to a productive year 2011!
Monday, October 25, 2010
Paper reviews
After a fairly hectic week, this week begins with no company email (don't ask!) So this gives me a bit more time to do my paper reviewing for the Workshop on Engineering Service-Oriented Applications. I've got a couple to work on and they appear interesting at first glance. A happy alternative to the last few weeks.
Monday, October 04, 2010
Timezone issues
The past couple of weeks have been hectic. First I was at JavaOne, then I spent time around the west coast visiting customers and partners (talking a lot about Cloud, funnily enough). Then it was a red-eye to Boston and meetings for the rest of the week, before getting home in time for the weekend. Several timezones over two weeks played havoc with me and I missed a few calls due to my forgetting where in the world I was!
This week is slightly better, with meetings in Paris tomorrow, then Berlin for the SOA Symposium (where I'm giving a couple of presentations) and JUDCon (no planned sessions for me to give, though I may do a lightning talk). Back home in time for the weekend again. The next couple of weeks should be in and around our Newcastle offices! And maybe in between all of this I'll get a chance to do some coding!
Tuesday, September 14, 2010
Blast from the past
I had no idea the University was scanning in old theses. I haven't seen this one for almost 20 years!
Wednesday, September 08, 2010
Cloud and multitenancy
Over the past year or so I've been doing a lot of thinking about various aspects of cloud. One of the ones I keep coming back to is the issue of multi-tenancy. Many years ago, when I was formulating the categorization of transactional replication strategies, we found that splitting up the object server (methods) from the state and allowing each to be replicated independently, gave the ability to express everything necessary to cover the range of replica consistency protocols from passive through active. Sharing of the executable versus sharing of the state is based on aspects such as determinism of execution (passive is the only option in some cases) and speed of fail-over (active tends to do much better).
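The server/state split can be sketched in a few lines (illustrative code, not any particular protocol implementation): in passive replication only the primary executes the method and ships the resulting state to its backups, whereas in active replication every replica executes the method itself, which is why determinism becomes the deciding factor.

```python
"""Sketch of the object server (methods) versus state split described
above. Passive: the primary executes, state is checkpointed to backups.
Active: all replicas execute the same (deterministic) method."""


class Replica:
    def __init__(self):
        self.state = 0

    def apply(self, delta):          # the "object server": method execution
        self.state += delta


def passive_update(primary, backups, delta):
    primary.apply(delta)             # only the primary executes
    for b in backups:
        b.state = primary.state      # the state, not the method, is shipped


def active_update(replicas, delta):
    for r in replicas:
        r.apply(delta)               # every replica executes; must be deterministic


p, b = Replica(), Replica()
passive_update(p, [b], 5)
assert p.state == b.state == 5

r1, r2 = Replica(), Replica()
active_update([r1, r2], 5)
assert r1.state == r2.state == 5
```

Fail-over speed falls out of the same picture: the active replicas are already up to date, while a passive backup is only as fresh as its last checkpoint.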
So what has this to do with multitenancy? Well I've been thinking about what it means to have a true multitenant application and hence PaaS. Some people seem to suggest that multitenancy is somehow new, complex or the domain of a new breed of engineers and vendors. But if you think about it, we've been using multitenant environments for years. Your operating system is multitenant, with multiple applications resident and often running concurrently, perhaps modifying shared data structures as a result. Even your modest Web server can be considered multitenant. And there are a range of strategies for achieving multitenancy based on a similar server/state split. Therefore, whether you're a SaaS implementer or a PaaS architect, the same factors come into play. And critically for SaaS implementers, if your PaaS doesn't support these things then you're going to have to!
In terms of PaaS, there are really 3 components that can be played with in terms of multitenancy and application and data deployment. These are the VM, the application server (container of business logic) and the data (database, file system etc.) For instance, you could have each tenant in a separate application server but running on the same VM, with data split between different database instances. Or you could have each tenant in the same application server on the same VM, with data in the same database, perhaps split on different tables. (Other options and configurations exist, of course.)
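Those placement choices can be written down as data, which is a useful way to see that they are points on one spectrum rather than fundamentally different architectures. The following sketch (tenant and component names are hypothetical) encodes two of the configurations mentioned above and asks which of the three components two tenants share:

```python
"""Illustrative sketch of the placement choices above: each tenant is
assigned a (VM, application server, data store) triple, so the same
model covers shared-everything through dedicated-everything."""

# Both tenants in one app server on one VM, one database, split by schema:
shared_container = {
    "tenant_a": {"vm": "vm1", "app_server": "as1", "db": "db1", "schema": "a"},
    "tenant_b": {"vm": "vm1", "app_server": "as1", "db": "db1", "schema": "b"},
}

# Separate app servers on the same VM, data in separate database instances:
isolated_container = {
    "tenant_a": {"vm": "vm1", "app_server": "as1", "db": "db1", "schema": "a"},
    "tenant_b": {"vm": "vm1", "app_server": "as2", "db": "db2", "schema": "b"},
}


def shared_components(placement, tenant_x, tenant_y):
    """List which of the three components two tenants share."""
    x, y = placement[tenant_x], placement[tenant_y]
    return [k for k in ("vm", "app_server", "db") if x[k] == y[k]]


print(shared_components(shared_container, "tenant_a", "tenant_b"))    # ['vm', 'app_server', 'db']
print(shared_components(isolated_container, "tenant_a", "tenant_b"))  # ['vm']
```

A PaaS worth the name would let you move a tenant between these configurations without the SaaS developer rewriting anything.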
Now all of these various combinations have more or less been around before, as I said earlier. The big problem though has been around ease of use. I've written enough Unix kernel services before, for instance, to know most of the ins-and-outs of sharing state without causing the operating system to crash, and the best way to use pthreads or memory mapped files in a shared environment, but it's not always intuitive or based on well documented processes. This is precisely where a PaaS fits in: it must support the widest range of approaches and in a way that does not require the SaaS developer to have to worry about them. We can learn a lot from what's been done in the past around the intricacies of achieving multitenancy, but we do need to make it far more consumable than has perhaps been the case so far.
Tuesday, September 07, 2010
Interesting new paper on transactions
I met Daniel Abadi at HPTS 2009 last year, where he presented on HadoopDB. Very interesting and a good presenter. So it's good to see his latest paper on determinism and transactions in database systems. It's a good read and particularly for me because it mixes the two areas that have always been my interests: transactions and replication. The authors have some good things to say about NoSQL, but specifically around relaxing ACID semantics, the trade-offs that incurs and how perhaps there is an alternative. Again, this is an area I've had a bit to do with over the past few decades. I'll have to think about how and whether this is applicable to some of the work we're doing at the moment in large scale data grids.
Monday, August 30, 2010
Coming of age?
Thousands of years ago the coming of age ritual involved hunting and killing (e.g., mammoths or dinosaurs, depending upon which film you believe). Due to changes in a range of things, including society and the fact that mammoths (as well as dinosaurs) died out (maybe related to hunting, strangely enough) the rite of passage changed.
In our household one aspect of our version of this ritual is introducing the kids (my 8 year old now) to the all time great movies. Several weeks ago it was Lord of the Rings, which went down really well. This weekend we started into Star Wars. Of course the first question was which order to play them? Against my better judgement we went with the chronological order, i.e., Episode 1 first. The first two movies went down well with him, though the third was more of a slog, but of course the best movies are yet to come.
However, I realised that although it's been about 8 years since I last watched the movies, two things remain a constant for me: the midichlorian rubbish detracts from the story (what was George Lucas thinking?!) and I still can't stand Jar Jar Binks! But my son loved him! If he loves the Ewoks then all is lost and I think I'll fail him on this rite of passage!
Friday, August 27, 2010
Long weekend coming up
It's a national holiday here on Monday, so this is a nice long weekend. Unlike my 2 week vacation which only ended at the start of last week, I've decided to not touch anything remotely work related until Tuesday (that holiday turned out to be > 50% work!) Well that's the theory at least. I know there may be some pressure to bend that rule slightly, so we'll wait and see!
CloudBees
It's been 18 months since Sacha left JBoss and I took over. He said he was going to take time off to enjoy being with his family and (then) new baby daughter. However, I think many of us knew at the time that it wouldn't be long before he was back doing something interesting. So it's really good to be able to announce that that effort is called CloudBees and they've officially launched! Hudson as a Service (HaaS) is a really good idea! Coincidentally it's something that JBoss and Arjuna separately considered at one time or another with their/our Distributed Test Framework.
Good luck Sacha and team!
Wednesday, August 25, 2010
BBC Micro making a comeback?
I loved this article! It brought back a lot of memories from when I was programming in BBC Basic. I upgraded my machine to support Metacomco's Pascal and C as well. Lots of fun and I like to think the "hardships" that the students of today mentioned helped to make me appreciate what it takes to write code in less than 32K of memory!
Wednesday, August 18, 2010
Classic C++ textbook
Every now and again I have to go through my cache of books and box some of them up for storage. This time it was some of my old text books from university, but while going through the process I went through all of my C++ related books. One of the best I've ever read is Barton and Nackman. I got this book just after it came out (I think I got it just after visiting Graeme at Transarc.) It's a wonderful book and reading the Amazon comments afterwards it's good to see that I'm not alone in thinking that. If you're a C++ programmer then you should definitely check it out.
Monday, August 09, 2010
Private Clouds?
I've known Werner Vogels for several decades, ever since we were both doing our PhDs. Like all good friends and scientists, we don't always agree on everything. Case in point is that Werner doesn't believe Private Clouds are clouds and I think his arguments against are artificial and short sighted. Now of course you could say that he and I take our perspectives on this debate precisely because of our employers. However, that's wrong, at least where I'm concerned.
As I said earlier this year, I think that today's definition of Cloud is limiting and emphasizes Public Cloud precisely because that's what most people have access to. But I also believe that Public Clouds are not going to be as important in the future. Cloud is a natural evolution of hardware and software (middleware), but if you liken the roadmap for Cloud to that of cars, today's Clouds are like the Model T: showing everyone the potential, but not available to the masses. We should be looking at the equivalent of the next hundred years of evolution in automotive technologies as far as Cloud is concerned, bringing their benefits to the masses (of people and workloads).
This development has to include Private Clouds (which, contrary to what Werner states, don't necessitate corporations having to buy more hardware), but so much more. The true cloud is the collection of processors that exist virtually everywhere you turn, including mobile devices and sensors. That's where the definition of Cloud must go. In many ways it's returning Cloud to one of its progenitors, ubiquitous computing. By that point there won't be a Public, Private or Personal Cloud, there'll be "just" Cloud (or maybe some other term). Where your application is hosted will still remain important, but not because of any artificial reasons due to words such as 'private' or 'public'.
Saturday, July 31, 2010
Why JeAS?
I've been saying for a while that the last thing the industry needs is a return to the days of vendor lock-in, or somehow resetting the clock on the past 40+ years of middleware and rediscovering it all again. That's why I believe we have to leverage what we've got. Yes, it needs to morph and evolve, but we should start by using (and reusing) existing investments. Furthermore, I believe that if the ideals behind Cloud are to be realised then they must start by tackling existing workloads.
This is why the application server (no formal definition in this article) is the right vehicle for applications. It doesn't matter whether you consider this to be Tomcat or your favourite full application server (JBoss, of course!): whatever is hosting your important applications today, and whatever your developers are comfortable with, is the basis of your own PaaS requirements. Therefore, it has to be the basis for PaaS wherever you may want to deploy in the future. But even today you often find that your application server may be providing more functionality than you need, at least initially. Considering Java Enterprise Edition for a moment, that's one of the reasons behind the introduction of profiles (which always reminded me of the Core Services Framework Bluestone/HP pushed back in 2000, driven by my friend and co-author Jon Maron).
So this is where I come from when I mention Just Enough Application Server: when deploying into a PaaS you really need support from your underlying application server to ensure that just enough is deployed and no more. Ideally this should be done automatically as your appliance is generated, but static is OK too. Throw in a bit of autonomous monitoring and management in case things change on the fly (application and object migration is a distinct possibility, so the underlying infrastructure/application server needs to be able to cope), and you've got yourself one super sleek PaaS.
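To make the idea concrete, here's a minimal sketch of what "just enough" resolution might look like: starting from the services an application actually declares, walk the transitive closure of subsystem dependencies so the generated appliance contains only what's needed. All of the names here (the subsystems, the `JeasResolver` class) are hypothetical illustrations, not a real JBoss API.

```java
import java.util.*;

// Hypothetical sketch: resolve the minimal set of application server
// subsystems an application needs, given what it declares it uses.
public class JeasResolver {
    // Toy dependency graph: subsystem -> subsystems it requires.
    private final Map<String, List<String>> deps = new HashMap<>();

    public JeasResolver() {
        deps.put("servlet", List.of("web-connector"));
        deps.put("ejb", List.of("transactions", "naming"));
        deps.put("transactions", List.of("naming"));
        deps.put("messaging", List.of("transactions"));
        deps.put("web-connector", List.of());
        deps.put("naming", List.of());
    }

    // Depth-first walk from the declared services to everything they pull in.
    public Set<String> resolve(Collection<String> declared) {
        Set<String> needed = new TreeSet<>();
        Deque<String> toVisit = new ArrayDeque<>(declared);
        while (!toVisit.isEmpty()) {
            String s = toVisit.pop();
            if (needed.add(s)) {
                toVisit.addAll(deps.getOrDefault(s, List.of()));
            }
        }
        return needed;
    }

    public static void main(String[] args) {
        JeasResolver r = new JeasResolver();
        // An app declaring only servlets and EJBs: messaging is never provisioned.
        System.out.println(r.resolve(List.of("servlet", "ejb")));
    }
}
```

The same resolution could run statically when the appliance is generated, or again at runtime if monitoring detects that the application's needs have changed.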
Thursday, July 29, 2010
Sunday, July 18, 2010
Has Java reached a point of inflection?
I've been wondering about this for a while. As far as languages go, Java has had a good run, remaining in the number 1 or 2 spot for the better part of a decade. The language came along as the Web was really growing from its fairly basic roots, and it's arguable that Java influenced the Web's development more than many other languages. It certainly took over from the likes of Pascal and C/C++ in universities, and as a result most new software developers have more than a passing knowledge of the language. Of course it's never been a perfect language (I'm not sure such a thing will ever exist, except for very domain-specific languages); it's also never been my favourite language (C++ still holds that position). However, I think that Java has had an overall positive effect on the industry.
The Java platform (formerly known as J2EE) took a few years to emerge after the first releases of Java (aka Oak) and initially competed with CORBA in the non-Microsoft camp. Once it was clear that J2EE would dominate, CORBA tried to embrace it more closely (the CORBA Component Model, for instance) but was never quite the force it once was. However, J2EE wasn't perfect and a lot of that could be traced to its CORBA roots. It wasn't really until EE5 and EE6 that it managed to escape those shackles, but even despite those problems it dominated the industry. Of course it's precisely because of some of these issues that frameworks such as Spring sprang (!) up, but even then a lot of those deployments ran on J2EE to simplify development.
During its life Sun was a pretty good custodian of the language and the platform. Yes, there were issues, particularly when Sun moved from being a neutral party to one that competed in the vendor landscape. But when you think about how one other company handled a similar position in the 80's and 90's, things could have been a lot less open. In fact I'm certain that the success of both the language and the platform is due in no small part to this relative openness that Sun managed to juggle, despite their conflicts of interest. Towards the end of their life as an independent company, with the likes of the Apache License issue, it was obvious that Sun couldn't quite make the leap that was (and is) required to continue the dominance of the language/platform (and, you could argue, to revitalize it).
And of course that brings us to the present and Oracle. If Java was at a point of inflection prior to their acquisition of Sun, it remains there today, many months afterwards. Others and I have argued that this is a good opportunity for Oracle to "do the right thing" and make good on their previous statements in that regard. To continue without such change risks pushing us over that tipping point, with fragmentation, vendor-specific options and the end of the world as we know it. (OK, that last point is unlikely to happen!) As I've been saying around Cloud, we really shouldn't be looking at turning back the clock: it won't do anyone, including Oracle, any good.
Now maybe Oracle are considering making some positive announcements at the forthcoming JavaOne. I hope so. Continuing to make Java open and fast moving will benefit everyone, customers and vendors alike. And hopefully Oracle can make the leap that Sun never seemed to be able to make. What's that they say about Nixon and China?
Friday, July 16, 2010
Red Hat/Newcastle University research day
Just back from the first official research day with Newcastle University and it was good. We covered a lot of short and long term research areas, including fault tolerance, scalability, event processing and policy definition/management. It all had a very heavy practical slant to it, which is even better given that we're hoping to see a lot of this come through into various JBoss/Red Hat projects and products eventually. All in all it was a good day, and hopefully next time we'll take a couple of days so we can delve into things in much more detail. But for now I've got to go through my notes and start working on getting these R&D efforts underway!
Saturday, July 10, 2010
JUDCon Europe 2010
I've mentioned JUDCon before, and it's only been a couple of weeks since we had the very first event in Boston, which went very well. However, no time to relax since it was always our intention to have a couple of these a year and it's surprising how much lead time you need to make it happen. So we've decided on the dates and location for the European event: 7th and 8th of October in Berlin. Next up is figuring out the themes for the tracks and then we'll open up the call for presentations.
Thursday, July 08, 2010
Cloud-TM project
We're involved with a new EU Research project called Cloud-TM. Should be some good opportunities for long term research and development, whilst at the same time having strong industrial relevance. I'm looking forward to the kick-off meeting next week in Portugal!
Wednesday, June 23, 2010
D'oh!!
I've given countless presentations at many different events over the years and usually I'll either be on time or run slightly over time. On a few occasions I've finished early, but never have I finished 20 minutes early thinking that I was over time! I started my first JBoss World presentation this year at 10:20am and got into my stride. I think I got too caught up in what I was saying because at some point I looked down at my watch and noticed it was 10:50am. I obviously forgot what time I'd started because the only thought that ran through my mind was "Oh sh*t, where did the time go, I've only got 10 minutes left!" I managed to get through the remaining 10 slides or so by skipping some (fortunately they'll all be available on the web) and got into Q&A time, still blissfully unaware that I still had 20 minutes remaining. In fact it wasn't until I was walking away that someone pointed it out! So, if you were in that session I definitely apologise for rushing needlessly!
Tuesday, June 22, 2010
MW4SOC CFP
CALL FOR PAPERS
===============
+--------------------------------------------------------+
| 5th Middleware for Service-Oriented Computing (MW4SOC) |
| Workshop at the ACM/IFIP/USENIX Middleware Conference |
+--------------------------------------------------------+
Nov 29 - Dec 3, 2010
Bangalore, India
http://www.dedisys.org/mw4soc10/
This workshop has its own ISBN and will be included in the ACM digital library.
Important Dates
===============
Paper submission: August 1, 2010
Author notification: September 15, 2010
Camera-ready copies: October 1, 2010
Workshop date: November 30, 2010
Call details
============
The initial visionary promise of Service Oriented Computing (SOC) was a world of cooperating services being loosely coupled to flexibly create dynamic business processes and agile applications that may span organisations and heterogeneous computing platforms but can nevertheless adapt quickly and autonomously to changes of requirements or context. Today, the influence of SOC goes far beyond the initial concepts of the original disciplines that spawned it. Many would argue that areas like business process modelling and management, Web2.0-style applications, data as a service, and even cloud computing emerge mainly due to the shift in paradigm towards SOC. Nevertheless, there is still a strong need to merge technology with an understanding of business processes and organizational structures.
While the immediate need of middleware support for SOC is evident, current approaches and solutions still fall short by primarily providing support for only the intra-enterprise aspect of SOC and do not sufficiently address issues such as service discovery, re-use, re-purpose, composition and aggregation support, service management, monitoring, and deployment and maintenance of large-scale heterogeneous infrastructures and applications. Moreover, quality properties (in particular dependability and security) need to be addressed not only by interfacing and communication standards, but also in terms of actual architectures, mechanisms, protocols, and algorithms. Challenges are the administrative heterogeneity, the loose coupling between coarse-grained operations and long-running interactions, high dynamicity, and the required flexibility during run-time. Recently, massive-scale and mobility were added to the challenges for Middleware for SOC.
These considerations also lead to the question to what extent service-orientation at the middleware layer itself is beneficial (or not). Recently emerging "Infrastructure as a Service" and "Platform as a Service" offerings, from providers like Amazon, Google, IBM, Microsoft, or from the open source community, support this trend towards cloud computing which provides corresponding services that can be purchased and consumed over the Internet. However, providing end-to-end properties and addressing cross-cutting concerns like dependability, security, and performance in cross-organizational SOC is a particular challenge and the limits and benefits thereof have still to be investigated.
The workshop consequently welcomes contributions on how specifically service oriented middleware can address the above challenges, to what extent it has to be service oriented by itself, and in particular how quality properties are supported.
Topics of interest
==================
* Architectures and platforms for Middleware for SOC.
* Core Middleware support for deployment, composition, and interaction.
* Integration of SLA (service level agreement) and/or technical policy support through middleware.
* Middleware support for service management, maintenance, monitoring, and control.
* Middleware support for integration of business functions and organizational structures into Service oriented Systems (SOS).
* Evaluation and experience reports of middleware for SOC and service oriented middleware.
Workshop co-chairs
==================
Karl M. Göschka (chair)
Schahram Dustdar
Frank Leymann
Helen Paik
Organizational chair
====================
Lorenz Froihofer, mw4soc@dedisys.org
Program committee
=================
Paul Brebner, NICTA (Australia)
Gianpaolo Cugola, Politecnico di Milano (Italy)
Walid Gaaloul, Institut Telecom (France)
Harald C. Gall, Universität Zürich (Switzerland)
Nikolaos Georgantas, INRIA (France)
Chirine Ghedira, Univ. of Lyon I (France)
Svein Hallsteinsen, SINTEF (Norway)
Yanbo Han, ICT Chinese Academy of Sciences (China)
Valérie Issarny, INRIA (France)
Mehdi Jazayeri, Università della Svizzera Italiana (Switzerland)
Bernd Krämer, University of Hagen (Germany)
Mark Little, JBoss (USA)
Heiko Ludwig, IBM Research (USA)
Hamid Reza Motahari Nezhad, HP Labs (USA)
Nanjangud C. Narendra, IBM Research (India)
Rui Oliveira, Universidade do Minho (Portugal)
Cesare Pautasso, Università della Svizzera Italiana (Switzerland)
Fernando Pedone, Università della Svizzera Italiana (Switzerland)
Jose Pereira, Universidade do Minho (Portugal)
Florian Rosenberg, Vienna University of Technology (Austria)
Giovanni Russello, Create-Net (Italy)
Regis Saint-Paul, CREATE-NET (Italy)
Dietmar Schreiner, Vienna University of Technology (Austria)
Bruno Schulze, National Lab for Scientific Computing (Brazil)
Francois Taiani, Lancaster University (UK)
Aad van Moorsel, University of Newcastle (UK)
Roman Vitenberg, University of Oslo (Norway)
Michael Zapf, Universität Kassel (Germany)
Liming Zhu, NICTA (Australia)
Saturday, June 19, 2010
JBossWorld and JUDCon
Off to Boston tomorrow for JBossWorld and JUDCon. The former is always a good event, but it's really the latter that I'm looking forward to the most, since it's the first ever one and it's taken us a while to organize. I'm already working on the European version that'll be coming in a few months' time, so I hope to get some constructive feedback from the people attending over the coming few days. And for my long flight to Boston I'm having a complete break and finishing reviewing someone's PhD thesis.
Tuesday, June 15, 2010
Classic paper
I came across this paper in my pile of printed materials from when I was doing my PhD. Great paper at the time and still applicable today. Well worth a read.
One of the best books I've ever read ...
I didn't realise that To Kill A Mockingbird was almost 50 years old! It's definitely one of the best books I've ever read (first while a teenager at school) and I go back to it every few years. It surprised me how much I loved the book from the start, given that at the time sci-fi and fantasy were the mainstays of my library. But from the age of 13 or 14 to the present day it's a book I recall in vivid detail. And of course Gregory Peck makes a great (if not quite old enough) Atticus Finch. If you haven't read it then you should definitely do so!
Monday, June 14, 2010
When did we decide lock-in was good?
Many years ago the number of standards bodies around could be counted on the fingers of one hand. Back then, vendor lock-in was a reality and most of the time a certainty. But our industry matured and customers realised that being tied to a single vendor wasn't always a good thing. Over the past 20 years or so, standards bodies have sprung up to cover almost every aspect of software. It's arguable that we now have too many standards bodies! Plus standards only work when they are based on experience and need: rushing to a standard too early in the cycle can result in something that isn't useful and may inhibit the real standard when it's eventually needed.
But I think, or at least thought, that most people, including developers and end-users, understand why standards are important. The move towards DCE, CORBA and J2EE illustrated this. Yes sometimes these standards weren't necessarily the easiest to use, but good standards should evolve to take this sort of thing into account, e.g., the differences between EE6 and J2EE 1.0 are significant in a number of areas not least of which is usability.
Furthermore a standard needs to be backed by multiple independent vendors and ideally come from an independent standards body (OK Java doesn't fit this last point, but maybe that will change some day.) So it annoys me at times when I hear the term open standard used to refer to something that may be used by many people but still only comes from a single vendor, or perhaps a couple, but certainly doesn't fit any of the generally accepted meanings of the term.
And yes this rant has been triggered by some of the recent announcements around Cloud. Maybe it's simply due to where we are in the hype curve, because I can't believe that developers and users are going to throw away the maturity of thought and process that have been built up over the past few decades concerning standards (interoperability, portability etc.) Or have the wool pulled over their eyes. Of course we need to experiment and do research, but let's not ignore the fact that there are real standards out there that either are applicable today or could be evolved to applicability by a concerted effort by many collaborating vendors and communities. You don't want to deploy your applications into a Cloud only to find that you can't get them back or can't integrate them with a partner or customer who may have chosen a different Cloud offering. That's not a Cloud, that's a prison.
Tuesday, June 01, 2010
Reading and not understanding
I can't remember when I first read The Innovator's Dilemma, but it was only a few years back when I first read The Tipping Point. Both are good books and complement each other. However, some interesting conversations over the long weekend here made it clear to me that some people who say they've read them either haven't or have failed to understand them. Now it's one thing when friends fall into this category, but it's a completely different thing when it's business leaders quoting them as gospel to back up dubious choices. No individuals' names and no company names, but it turns out it's quite common in our industry. Probably elsewhere too. Maybe there are some texts (books, papers etc.) where you should be forced to pass a test before being allowed to refer to them.
Thursday, May 27, 2010
TweetDeck
Many of the folks at the Thinking Digital conference I'm attending are using Macs and running TweetDeck. So while sat here listening to yet another great talk, I decided to install it. Wow! I've been dabbling with twitter for a while, but now I can see that others have been tweeting at me and I've not seen them until now. Apologies to everyone to whom that applies! Now I may start using twitter more.
My next phone?
Last October I decided to upgrade my phone. At the time the iPhone wasn't available through my provider, so I went with the HTC Hero, running Android. It got good write-ups and I've had an interest in Android for a while. Since getting it I've enjoyed it, but it's not a match for the iPhone. The hardware is too slow, it's not as responsive, the store isn't as good, you can't (couldn't) store apps on the SD card, and there are a few other minor issues.
But I like the phone, so am glad I got it. That is, until recently. My phone is running Android 1.5, and over the past 6 months we've seen 2.1 and now 2.2 come out. Now of course I could go and install these on my phone myself, but the "preferred" route is to get the official release from HTC, including their SenseUI: a great selling point over other Android phones and the iPhone. Yet to date there is still no official 2.1 release from HTC, and there may never be a 2.2 release for this phone.
Now I understand that technology can get dated and old things can't always run the new stuff. But just over 6 months after I got it I refuse to believe that my phone is outdated! Maybe this is an HTC specific issue and they really just need to get their act together. But it appears that Google would disagree and that fragmentation is inevitable. If it is then I am concerned for the future of Android.
For now I'll wait to see what comes from HTC. But when my contract's up I will likely move to an iPhone and put my journey with Android down to experience.
Tuesday, May 25, 2010
Paul on PaaS
It's been a while since Paul Fremantle and I caught up, but it's good to see that we're still thinking along similar lines on a number of topics. This time it's PaaS and vendor lock-in. In fact there's a growing concern that this is something that may not be obvious to users, given the amount of hype that's building in the atmosphere.
Thursday, May 20, 2010
Interesting Cloud announcement
It's another week and we have another announcement around Cloud, this time from Google. Quite interesting but the underlying message is pretty much what I said in an earlier entry: if you want to move to this type of cloud then you'd better be expecting to re-code some or all of your application. Oh joy. Here we go again! Even the tag line of "... write once, deploy anywhere" is a lot of smoke-and-mirrors. "Deploy to any one of four vendor specific Clouds, but don't forget about the lock-in potential there" would be more appropriate.
Come on guys. We really cannot afford to reinvent the world again. So as a prospective user, unless you really aren't interested in leveraging your existing investments in software and people, this looks like another non-starter. Maybe 2 out of 10 for marketing buzz, but 0 for effort.
Sunday, May 16, 2010
Duane tells it like it is
I've known Duane "Chaos" Nickull for many years. He's a great guy, a good friend, and knows his stuff on a range of subjects, technical and not so technical (he's an excellent musician, for instance). His most recent post made me laugh out loud! A good entry to say the least.
Friday, May 07, 2010
Rich Frisbie
I met some really great guys when Bluestone acquired Arjuna Solutions. It was a great culture and family, one with which I'm pleased to have been involved. It's therefore very sad to hear that one of the friends I made during those years, Rich Frisbie, has passed on so suddenly and at such a young age. I wish his family all the best and will have a silent toast to Rich. 'nuff said.
Thursday, May 06, 2010
The Chewbacca Defense
I use it all the time when I run into things that just don't make sense. It's a light-hearted way of telling people they need to think more!
Tuesday, May 04, 2010
Monday, May 03, 2010
Plan 9 operating system
While reading about cult movies I was suddenly reminded about the Plan 9 operating system. It came out while we were deep into the original Arjuna development and a heavy user of Sun Sparcs, so naturally it was something we had to investigate (along with the Spring operating system)! It had some interesting ideas, and with distribution at its heart it was very relevant to what we were researching (the University was the home of the Newcastle Connection, with some similar aims from many years before.)
Anyway, it was an interesting time and I'm pleasantly surprised to learn that there's still work going on into Plan 9, twenty something years after it began. Very cool!
Friday, April 30, 2010
Cloudy days ahead for applications?
I've been thinking a lot recently about the Cloud and its potential as a disruptive technology. I got to wondering about how we arrived at where we are today, and one of my favourite books sprang to mind as a way of articulating that, at least to myself.
To misuse HG Wells ever so slightly, "No one would have believed in the last years of the first decade of the twenty first century that the world of enterprise software was being watched keenly and closely by intelligences greater than man's and yet as mortal as his own; that as men busied themselves about their various concerns they were scrutinised and studied, perhaps almost as narrowly as a man with a microscope might scrutinise the transient creatures that swarm and multiply in a drop of water. With infinite complacency men went to and fro over this globe about their little affairs, serene in their assurance of their empire over middleware. It is possible that the infusoria under the microscope do the same. No one gave a thought to some of the relatively new companies as sources of danger to their empires, or thought of them only to dismiss the idea that they could have a significant impact on the way in which applications could be developed and deployed. Yet across the gulf of cyberspace, minds that are to our minds as ours are to those of the beasts that perish, intellects vast and cool and unsympathetic, regarded enterprise applications with envious eyes, and slowly and surely drew their plans against us."
Of course the rise of Cloud practitioners and vendors was in no way as malevolent as the Martians, but the potential impact on the middleware industry may be no less dramatic (or drastic). Plus there was a level of ignorance (arrogance) on the part of some middleware vendors towards the likes of Amazon and Google, just as the Victorians believed themselves unassailable and masters of all they surveyed.
But there are still a number of uncertainties around where this new wave is heading. One of them is exactly what does this mean for applications? On the one hand there are those who believe applications and their supporting infrastructure (middleware) must be rewritten from scratch. Then there are others who believe existing applications must be supported. I've said before that I believe as an industry we need to be leveraging what we've been developing for the past few decades. Of course some things need to change and evolve, but if you look at what most people who are using or considering using Cloud expect, it's to be able to take their existing investments and Cloudify them.
This shouldn't come as a surprise. If you look at what happened with the CORBA-to-J2EE transition, or the original reason for the development of Web Services, or even how a lot of the Web works, they're all examples of reusing existing investments to one degree or another. Of course over the years the new (e.g., J2EE) morphed away from the old, presenting other ways in which to develop applications to take advantage of the new and different capabilities those platforms offered. And that will happen with Cloud too, as it evolves over the next few years. But initially, if we're to believe that there are economic benefits to using the Cloud, then they have to support existing applications (of which there are countless), frameworks (of which there are many) and skill sets of the individuals who architect them, implement them and manage them (countless again). It's outsourcing after all.
Thursday, April 29, 2010
SOA metrics post
I wrote this a while ago and someone forgot to let me know it had finally been published. Better late than never I suppose.
Wednesday, April 28, 2010
A Platform of Services
Back in the 70's and 80's, when multi-threaded processes and languages were still very much on the research agenda, distributed systems were developed based on a services architecture, with services as the logical unit of deployment, replication and fault containment. If you wanted to service multiple clients concurrently then you'd typically fire off one server instance per client. Of course your servers could share information between themselves if necessary to give the impression of a single instance, using sockets, shared memory, disk etc.
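That one-server-instance-per-client pattern can be sketched in a few lines. This is purely illustrative (in Python, on Unix, with hypothetical names; the systems of that era would have been C): the accept loop forks a fresh process for each connection, so each client gets its own server instance.

```python
import os
import socket

def handle_client(conn: socket.socket) -> None:
    # One server instance per client: this code runs in its own process,
    # so a crash here loses only this client, not the whole service.
    data = conn.recv(1024)
    conn.sendall(b"echo: " + data)
    conn.close()

def serve_forever(listener: socket.socket) -> None:
    # Accept loop: fork one worker process per connection (Unix only).
    while True:
        conn, _ = listener.accept()
        if os.fork() == 0:       # child: serve this client, then exit
            listener.close()
            handle_client(conn)
            os._exit(0)
        conn.close()             # parent: keep accepting new clients
```

Sharing state between such instances would then be done explicitly, via sockets, shared memory or disk, as described above.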
Now although those architectures were almost always forced on us by limitations in the operating systems, languages and hardware at the time, it also made a lot of sense to have the server as the unit of deployment, particularly from the perspectives of security and fault tolerance. So even when operating systems and languages started to support multi-threading, and we started to look at standardising distributed systems with ANSA, DCE and CORBA, the service remained a core part of the architecture. For example, CORBA has the concept of Core Services, such as persistence, transactions and security, and although the architecture allows them to be colocated in the same process as each other or the application for performance reasons, many implementations continued to support them as individual services/processes.
Yes there are trade-offs to be made between, say, performance and fault tolerance. Tolerating the crash of a thread within a multi-threaded process is often far more difficult than tolerating the crash of an individual process. In fact a rogue thread could bring down other threads or prevent them from making forward progress, preventing the process (server) from continuing to act on behalf of multiple clients. However, invoking services as local instances (e.g., objects) in the same process is a lot quicker than if you have to resort to message passing, whether or not based on RPC.
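The fault-containment half of that trade-off is easy to demonstrate. As a hedged sketch (Unix only, with invented names): a component that crashes hard inside its own process is observed and survived by its supervisor, whereas the same failure inside a thread of a multi-threaded server would typically take the whole process, and all its clients, down with it.

```python
import os
import signal

def unreliable_worker() -> None:
    # Simulate a rogue component crashing hard (a segfault).
    os.kill(os.getpid(), signal.SIGSEGV)

def supervise() -> str:
    # Run the unreliable work in a separate process: the crash is
    # contained there, and the supervisor observes it and carries on.
    pid = os.fork()
    if pid == 0:                       # child
        unreliable_worker()
        os._exit(0)                    # not reached if the worker crashes
    _, status = os.waitpid(pid, 0)     # parent: reap and inspect the child
    if os.WIFSIGNALED(status):
        return "worker crashed, supervisor unaffected"
    return "worker finished cleanly"

if __name__ == "__main__":
    print(supervise())
```

The message-passing cost of that isolation is, of course, exactly the performance penalty the paragraph above describes.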
However, over the past decade or so as threading became a standard part of programming languages and processor performance increased more rapidly than network speeds, many distributed systems implementations moved to colocating services as the default, with the distributed aspect really only applying to the interactions between the business client and the business service. In some cases this was the only way in which the implementation worked, i.e., putting core infrastructure services in separate processes and optionally on different machines was simply no longer an option.
Of course the trade-offs I mentioned kicked in and were made for you (enforced on you) by the designers, often resulting in monolithic implementations. With the coining of the SOA term and the rediscovery of services and loose coupling, many started to see services as beneficial to an architect despite any initial issues with performance. As I've said before, SOA implementations based on CORBA have existed for many years, although of course you need more than just services to have SOA.
Some distributed systems implementations that embraced SOA started to move back to a service-oriented architecture internally too. Others were headed in that direction anyway. Still others stayed where they were in their colocated, monolithic worlds. And then came Cloud. I'm hoping that as an industry we can leverage most of what we've been implementing over the past few decades, but what architecture is most conducive to being able to take advantage of the benefits Cloud may bring? I don't think we're quite there yet to be able to answer that question in its entirety, but I do believe that it will be based on an architecture that utilises services. So if we're discussing frameworks for developing and deploying applications to the cloud we need to be thinking that those core capabilities needed by the application (transactions, security, naming etc.) will be remote, i.e., services, and they may even be located on completely different cloud infrastructures.
Now this may be obvious to some, particularly given discussions around PaaS and SaaS, but I'm not so sure everyone agrees, given what I've seen, heard and read over the past months. What I'm particularly after is a services architecture that CORBA espoused but which many people overlooked or didn't realise was possible, particularly if they spoke with the large CORBA vendors at the time: an architecture where the services could be from heterogeneous vendors as well as being based on different implementation languages. This is something that will be critical for the cloud, as vendors and providers come and go, and applications need to choose the right service implementation dynamically. The monolithic approach won't work here, particularly if those services may need to reside on completely different cloud infrastructures (cf the CORBA ORB). I'm hoping we don't need to spend a few years trying to shoehorn monoliths into this only to have that Eureka moment!
The lack of standards in the area will likely impact interoperability and portability in the short term, but once standards do evolve those issues should be alleviated somewhat. The increasing use of REST should help immediately too though.
Tuesday, April 20, 2010
Some things really shouldn't be changed
There are some things that shouldn't change, no matter how good an idea it may seem at first glance. For instance, the original Coke formula, the name of Coco Pops, and remaking The Searchers. Then again there are some things that really do benefit from a revision, such as Battlestar Galactica or the laws of gravity.
So it was with some trepidation that I heard they were going to remake The Prisoner. The original is a classic of 1960's TV that stood the test of time. I remember watching it at every opportunity while growing up (in the days when we only had 4 TV channels, so repeats were few and far between). Patrick McGoohan was The Prisoner and while the stories were often "out there", the series had this pulling power that made it unmissable.
I wondered how anyone could remake it and capture the essence of the original show. But I decided to be open-minded and sat down the other night to watch the first episode of the new series. Afterwards the first thought I had was "I'll never get that hour back again!" As a remake, it was terrible. As a stand-alone series, it was probably passable.
It looks like another good idea bites the dust. I'll be taking the series off my Sky+ reminder now and if I'm stuck for something to do at that time I'll either watch some wood warp or maybe watch some paint dry: both would be far more stimulating activities!
Monday, April 19, 2010
Volcano activity spoils conference
I was an invited speaker at the first DoD-sponsored SOA Symposium last year and really enjoyed it. I was invited to this year's event and was going to speak on SOA and REST. I'm sure it will be as good as last year, but unfortunately a certain volcano in Iceland has meant that my flights have been cancelled. So I'll have to watch from afar and hope that the troubles in the sky clear up soon. My best wishes go to the event organizers and I hope that next year presents an opportunity to attend for a second time.
Tuesday, March 23, 2010
A sad day
I just heard that Robin Milner has died. I met him a couple of times and found him approachable, very articulate and friendly. A great loss. My sympathies to his family.
Friday, March 19, 2010
Meetings and work
Irrespective of whatever title(s) I may have, I'm a software engineer. For well over 30 years I've been cutting code; whether it's games (my very first program was battleships on a paper-tape machine) or transaction systems, I get a lot of enjoyment out of it. I also love learning new programming languages (something that strangely enough doesn't translate to human languages, where I'm not so strong.) I don't think I'm ever happier than when I'm coding.
Now over the years I've spent a lot of time in various meetings, including standards efforts, project planning, customer engagements, architecture discussions etc. Some of them are also implicit, e.g., Stuart and I would talk a lot, often while coding, because we shared an office for over a decade. But most of them are explicit and you don't get a chance to code at the same time (paying customers tend to want your full attention, for instance!) I've been at a project planning meeting today in Brussels and although I could code in the evenings while in the hotel, it's not a lot of time to spend.
In general the majority of my days revolve around talking and writing documents. Most days I'll spend in meetings, either physically in a room with others or on a phone with them. Time for coding is limited during the day, so again it tends to happen late evenings, very early in the morning, weekends and on holiday. At times I believe I need to do it to keep myself grounded (and sane). And it doesn't have to be work coding that recharges my batteries (I think all engineers have their pet projects!)
When I was coding full-time it was easy for me to look back and see what I'd been doing with my time. These days it's not quite so easy as there is often no concrete evidence of what's been done on a daily basis. Of course that's a very short sighted view and when I look across the months/years and see the results those meetings generated it's very different. But for an engineer who has been coding for 75% of his life, it's hard at times when I'm sat in yet another meeting with a terminal, emacs and a programming language calling to me from a few inches away. Yes, meetings are important for what I have to do today, but my heart will always be elsewhere. Now back to some coding!
Monday, March 15, 2010
Dr Professor or Professor Dr?
Since I officially left The University I've been a Research Fellow (I was in my late 20's when the photo was taken), along with Stuart. The role involves a range of things such as being on PhD thesis committees, helping with research, co-authoring papers etc. It's something from which I've received a lot of enjoyment and satisfaction, and I think it's an important counterpoint to my normal day job.
It's a position that I think is a privilege. So you can imagine how I felt today when they made me a Professor! Despite the number of years I spent at The University it's a position I never thought I'd achieve, particularly after I left to pursue a career path through the likes of Arjuna (x2), Bluestone, HP, JBoss and Red Hat. I definitely need to thank Santosh, who over the past 20+ years has been my boss, colleague and friend. He's also epitomised for me what it means to be a Professor, something which I think has positively impacted who I am to this day.
QCon London slides on line
I just got confirmation that the QCon slides are on line now, with mine available at this location.
Sunday, March 14, 2010
Every Cloud has a silver lining
I've already stated that I think there's a lot Cloud can learn from the past and yet there is also more evolution of current approaches to come where Cloud is concerned. However, that doesn't mean that adding a little bit of Cloud pixie dust to everything immediately makes it better or more relevant.
I'm one of the reviewers for a special journal issue on Engineering Middleware for Service-Oriented Computing. Unfortunately several of the papers I've read seemed to be under the assumption that adding the words/terms 'Cloud' or 'SOA' to their works would make them relevant, when in fact it had almost the opposite effect. It didn't work with other technological waves such as Web Services or Java. If your work is relevant to a specific technology or a range of technologies then it should be obvious from the start and attempts to artificially push the reader into joining mental dots or making "intuitive leaps" reflect poorly on the author.
Saturday, March 13, 2010
QCon London 2010 update
I mentioned earlier that I was presenting at QCon London. Well I had a good time at QCon giving a presentation which was basically about lessons learnt while developing, using and selling transaction systems. Hopefully the slides will go up soon, but it seemed to go down well. According to the organizers, it had the second highest audience for the track, which, given that people were sitting on the floor, didn't surprise me. Everyone seemed to enjoy the talk and I hope they got as much out of it as I got putting it together.
What I did realise as I was giving the session was that I really have enough material for 4 or 5 presentations. Therefore, I may do some deep dives into specific aspects of the current presentation. I may even realise these as blog entries as well.
Thursday, March 04, 2010
QCon London 2010
I'm going to QCon London again and speaking about one of my favourite subjects. If you're around then come by and say hello. Which reminds me ... I need to write the presentation!
Red Hat and Newcastle University
It's taken a while to get all of the pieces of the puzzle together, but we've now got a formal relationship with the University. As one of my friends would say: Onward!
Monday, March 01, 2010
Saturday, February 13, 2010
Cloud as the death of middleware?
Over the last few months I've been hearing and reading people suggesting that the Cloud ([fill in your own definition]) is either the death of middleware, or the death of "traditional" middleware. What this tells me is that those individuals don't understand the concepts behind middleware ("traditional" or not). In some ways that's not too hard to understand given the relatively loose way in which we use the term 'middleware'. Often within the industry middleware is something we all understand when we're talking about it, but it's not something that we tend to be able to identify clearly: what one person considers application may be another's middleware component. In my world, middleware is basically anything that exists above the operating system and below the application. (I think it's a real shame that these days we tend to ignore the old ISO 7-layer stack, because a layered model like that can only help such a definition.)
But anyway, middleware has existed in one form or another for decades. There are obvious examples of "it" including DCE, CORBA, JEE and .NET, but then some other not so obvious ones such as the Web: yes, the WWW is a middleware system, including most of the usual suspects such as naming, addressing, security, message passing etc. And yes, over the past few years I've heard people suggest that the Web is also the death of middleware. For the same reasons that Cloud isn't its death knell, neither was the Web: middleware is ubiquitous and all but the most basic applications need "it", where "it" can be a complete middleware infrastructure such as JEE or just some sub-components, such as security or transactions. Now this doesn't mean that what constitutes middleware for the Cloud is exactly what we've all been using over the past few years. That would be as crazy a suggestion as assuming CORBA was the ultimate evolution of middleware or that the Web Services architecture would replace JEE or .NET (something which some people once believed). Middleware today is an evolution of middleware from the 1960's and I'm sure it will continue to evolve as the areas to which we apply it change and evolve. I think it is also inevitable that Cloud will evolve, as we decide precisely what it is that we want it to do (as well as what 'it' is) based upon both positive and negative experiences of what's out there currently. (That's why we have the Web today, after all.)
Implementations such as Google App Engine are interesting toys at the moment, offering the ability to deploy relatively simple applications that may be based on cut-down APIs with which people are familiar in a non-Cloud environment. But I'm fairly sure that if you consider what constitutes middleware for the vast majority of applications, the offerings today are inadequate. Now maybe the aim is for people who require services such as security, transactions, etc. to reimplement them in such a way that they can be deployed on-demand to the types of Cloud infrastructures around today. If that is the case then it does seem to solve the problem (bare minimum capabilities available initially), but I take issue with that approach too: as an industry we simply cannot afford to revisit the (almost) NIH syndromes that have scarred the evolution of middleware and software engineering in general over the past 4 decades. For instance, when Java came on the scene there was a rush to reimplement security, messaging, transactions etc. in this new, cool language. The industry and its users spent many years revisiting concepts, capabilities, services etc. that existed elsewhere and often had done so reliably and efficiently for decades, just so we could add the "Java" badge to them. OK, some things really did need reimplementing and rethinking (recall what I said about evolution), but certainly not as much as was reworked. This is just one example though: if you look back at DCE, CORBA, COM/DCOM, .NET etc. you'll see it has happened before, in a very Battlestar Galactica-like situation.
Therefore, if we have to reimplement all of the core capabilities that have been developed over the years (even just this century) then we are missing the point and it really will take us another decade to get to where we need to be. However, don't read into this that I believe that current middleware solutions are perfect today either for Cloud applications or non-Cloud applications. We've made mistakes. But we've also gotten more things right than wrong. Plus if you look at any enterprise middleware stack, whether from the 21st or 20th centuries, you'll see many core capabilities or services are common throughout. Cloud does not change that. In my book it's about building on what we've done so far, making it "Cloud aware" (whatever that turns out to mean), and leveraging existing infrastructural investments both in terms of hardware and software (and maybe even peopleware).
Of course there'll be new things that we'll need to add to the infrastructure for supporting Cloud applications, just as JEE doesn't match CORBA exactly, or CORBA doesn't match DCE etc. There may be new frameworks and languages involved too. But this new Cloud wave (hmmm, mixing metaphors there I think) needs to build on what we've learned and developed rather than being an excuse to reimplement or remake the software world in "our" own image. That would be far too costly in time and effort, and I have yet to be convinced that it would result in anything substantially better than the alternative approach. If I were to try to sum up what I'm saying here it would be: Evolution Rather Than Revolution!