Saturday, December 31, 2011

PaaS 2.0?

A while ago I had some things to say about people trying to add a version number to SOA. At the time it was 2.0, and I like to think I had a little to do with the fact that it died almost as quickly as it was created. I won't go into details, but the interested reader can catch up on it all later.

Now a friend who got caught in the SOA 2.0 crossfire came to me recently and pointed out that some people are now trying to coin the term 'PaaS 2.0' and asked my opinion. At first I didn't know what to think because the original reasons I was against SOA 2.0 didn't seem to apply here because PaaS is so poorly understood. There are no fundamental architectural principles around which it has grown. There are very few examples that everyone agrees upon. There's not even an accepted definition!

But that's when it hit me! How can you assign a version to something that is so ill defined? It's not like the Web, for instance, where it made sense to have a 2.0. OK, there's some good stuff from the likes of NIST, but there's no agreed reference architecture for PaaS, so how precisely can you say something is PaaS 2.0? The answer is that you can't. Now that doesn't mean you won't be able to do so eventually, but there are quite a few prerequisites that have to be satisfied before that can happen.

So what does this mean? Am I against PaaS 2.0 as I was with its SOA cousin? Yes I am, but for different reasons. As I outlined above, I think it's wrong to try to version something that is so ill defined. Let's figure out what PaaS 1.0 is first!

Friday, December 23, 2011

Future of Middleware

I think it's fair to say that despite my years in industry I'm still an academic at heart. I like the ability to spend time working on a problem without the usual product deadlines. Of course there's the potential that you come up with something that has little relevance to the real world, but that can be mitigated by staying close to industry through funding, sponsorship or other relationships. Often in industry we don't have the luxury of spending years coming up with the perfect solution and whilst it's for very good reasons, it can be frustrating at times for those involved.

But we all make the best of what we have to work with and I love my current position, despite the fact I get to spend less time researching than I would like. In fact in some ways I now understand what Santosh has been doing for years in directing and pushing others in the right directions, whilst at the same time wanting to get more involved himself but not quite having enough time to do it all.

Therefore, I take any opportunity I can find to dive back into research, write papers, code etc., and to attend, and possibly/hopefully present at, conferences and workshops that are often dominated by the research community, though obviously with practical overtones. The Middleware conference is one such event that I love to participate in, in one way or another. Over the years I've had papers there and been on the program committee, and not once have I been disappointed by the quality of submissions.

So it was great to be asked to write a paper with Santosh and Stuart on the future of middleware for FOME. Truth be told, Santosh did the bulk of the writing and his co-authors provided the disparate data and input that he's excellent at being able to form into a coherent whole. The result is a great paper that I presented in Portugal earlier this month. It went down well and I got a lot of good feedback, both from the academics present as well as industrial participants.

But the real high for me was just being at the workshop and listening to all of the other presentations. I had a wonderful time meeting with others there and getting as immersed in the research atmosphere as it's possible to do in 48 hours. I could cast my mind back many years to when I was in full-time research and compare and contrast with today. I got a lot out of the many conversations I had with researchers, both old and new to the field. I hope I had a positive impact on them too, because I came away invigorated and my mind full of new possibilities.

Sunday, November 20, 2011

Wave sick?

What with HPTS, JUDCon, JAX, QCon, JavaOne and various business meetings, I've been doing a lot of traveling recently. Time spent on a plane usually means my mind wanders a bit, covering various topics some of them unrelated. One of the things I got thinking about though, was definitely influenced by a series of talks I've been giving for a while, including at my JBossWorld keynote: the history of distributed systems.

I covered it at Santosh's retirement event too, but from a very personal perspective, i.e., how Arjuna related to it, and relate it did, often in a big way. So this got me to thinking about the various technology waves I've lived through and helped influence in one way or another. And it was quite "chilling" for me to realise how much I'd forgotten about what's happened over the past third of a century or more! (And that made me feel old too!)

I often take for granted that I lived through the start of the PC era: there were no PCs when I first started to code. In fact I'd been developing applications on a range of devices before IBM came out with the first PC or Microsoft came out with the first version of Word. I moved through the era of the BBC Micro, ZX80, Commodores, Ataris, etc. into the first Sun machines, Apples, PCs, laptops, desktops, PDAs, smartphones, pads and much much more. A huge change in the way we interact with computers and, importantly, the data they maintain. Many different paradigm shifts!

Looking at the middleware shifts that accompanied some of these hardware changes, and in fact were often driven by them, I've ridden a number of equally important waves: RPC, distributed objects, bespoke enterprise middleware architectures and implementations, standards-based middleware, several explosions of languages (functional and object-oriented, including Java), open source, Web Services, REST, mobile, ubiquitous computing, and of course fault tolerance running throughout with transactions and replication. And I'm probably forgetting other things in between these waves.

It's been a bumpy ride at times. The move from CORBA to J2EE wasn't necessarily as good as it could have been. Web Services were often vilified far more than made sense if you were objective. But overall I've enjoyed the ride so far, more or less. And now with Cloud, mobile and beyond it's looking like the next decade will be at least as interesting as the last three. I'm looking forward to playing my part in it!

Thursday, November 03, 2011

The future PC

I've been thinking a lot about what personal compute devices might look like in the future given the amount of time I've been looking at how things have evolved over the past 30 years. Not so much about what a mainframe or workstation computer might look like (assuming they even exist!) but what replaces your laptop, pad, phone etc. Now of course much of what I will suggest is influenced by how I would do it if I could. However, there's also a smattering of technical advancements in there for good measure.

So my biggest bugbear with my current situation is that I have a laptop (two if I include my personal machine), an iPad and a smartphone (two if I include the one I use for international travel). Each of them holds some subset of my data, with only one (laptop) holding it all. Plus some of that data is held in the cloud so I can share it with people or between devices. This is manageable at the moment, but it's frustrating when I need something on my iPad that's on my laptop or something on my phone that's on the iPad (you get the picture).

What I want is the same information available on all of these devices. In fact, what I want is one device that does it all. I rarely use my phone and pad concurrently, or my pad and laptop. There are exceptions of course, but bear with me. (I may be unique in this and some people might want multiple concurrent devices. But that's still possible in this environment.) What would typically satisfy me would be a way to modify the form factor of my device dynamically over time. Taking a touchscreen smartphone through a pad and then to a laptop with large screen, keyboard and trackpad. At each stage I'd like the best performance, graphically and compute, and the most amount of storage.

Is this possible? Well if you look at how hardware has evolved over the past decades it's not that far off. ARM dominates the smartphone arena and, although Intel/AMD will eventually find a way into that market, my money is on ARM to reach laptop and workstation performance before Intel/AMD make it into the low power consumption sector in any significant manner. So ARM powered laptops that perform on a par with their Intel/AMD cousins aren't far off.

What about main memory? Well you only have to look at how things have evolved recently, from 512MB through to 8GB and beyond. It's going to be possible to have 8GB smartphones and tablets soon. And SSDs are getting cheaper and cheaper by the month. Capacity-wise it may take them longer to get to the sizes of spinning disks, but once most laptop manufacturers include SSDs by default, the cost per gigabyte will plummet as their physical sizes continue to shrink too. Putting multiple instances in the same device will be possible to fill the capacity gap too.

Now you could assume that what I'm outlining is a portable disk drive, but it really isn't. I'm assuming it has storage, of course, but I'm also assuming it has a CPU and probably a GPU. Think plug computer, but much smaller and with much more power: certainly the processing power to rival a laptop and probably the graphical power too. I say 'probably' only because I can see situations where the GPU could be part of the form factor you plug the device into so that you can do work, e.g., the phone housing or the keyboard/screen.

Ok so there we are: my ideal device is the size of a gum packet (much smaller and you'll lose it) and can be plugged into a range of different deployment chassis. Now all I have to do is wait!

Friday, October 28, 2011

HPTS 2011

I'm back from HPTS and as usual it was a great snapshot of the major things happening or about to happen in our industry. In past years we've had Java transactions, ubiquitous computing, transactions for the Internet and the impacts of large scale on data consistency. We've also had discussions on the possible impact of future (back then) technologies such as SSD and flash on databases and transactions.

This year was a mix too, with the main theme being cloud (though it turned out that cloud wasn't that prevalent). I think the best way to summarise the workshop would be as a concentration on high performance and large scale (up as well as out). With talks from the likes of Amazon, Facebook and Microsoft, we covered the gamut from incredibly large numbers of users (700 million) through to eventual consistency (which we learnt may eventually not be enough!)

Even 25 years after it started it's good to see many of the original people behind transaction processing and databases still turning up. (I've only been going since the 90s.) This includes Stonebraker (aka Mr Alphabet Soup), who gave his usual divisive and entertaining talks on the subject of SQL and how to do it right this time, and of course Pat, who instructed us never to believe a #%^#%-ing word he says as he's now back on the side of ACID and 2PC! (All in good fun, of course.)

Now it's important to realise that you never go to HPTS just for the talks. The people are more than 50% of the equation for this event and it was good to see a lot of mixing and mingling. We had a lot of students here this time, so if past events are anything to go by I am sure we will see the influence of HPTS on their future work. And I suppose that just leaves me to upload the various presentations to the web site!

Thursday, October 13, 2011

Where have all the postings gone?

I know I've been blogging a lot this year, yet when I look at the entry count for this blog it's not as high as I had expected. Then I realised that most of my attention has been directed at JBoss. So if you're wondering where I am from time to time, it's over here.

RIP Dennis Ritchie

It's safe to say that no programming language has had as big an impact on my career as C. It's also safe to say that no operating system has had as big an impact on my career as Unix. So for these reasons and many others it is sad to hear about the passing of Dennis Ritchie. I met him once, many years ago, when he visited the University and spoke about a range of things, including C and Plan 9. He was a great speaker, someone who helped shape the world we live in, and a nice man. Yet another sad day.

Sunday, October 09, 2011

Bluestone and Steve Jobs

Amongst all of the various articles and tributes to Steve Jobs I came across this from Bob, who is one of the people that has influenced my career significantly (and positively!) over the years. So it's interesting to read the influence Steve had on Bob, Bluestone, HP and hence Arjuna, JBoss and now Red Hat (not forgetting the other companies with which Bob's been involved over the years)! Thanks Bob and thanks, indirectly, Steve.

Thursday, October 06, 2011

A sad day for Apple and the world

I'm an Apple user and have been for many years (since the 1990s). I've had desktops, laptops, iPods, iPhones and of course various software. I've been an admirer of Apple and Steve Jobs for just as long, so it's really sad to hear that he has passed away today. The world is a little bit darker now, but I hope his legacy lives on. My thoughts go out to his family.

Sunday, September 11, 2011

September 11th

It's that time of year again but of course this time it's 10 years on. Time to reminisce and remember those who weren't so lucky. I've been thinking about this day for a while and wondering about all of those things that I managed to do in the last decade that I wouldn't have been able to if I'd made a slightly different choice back then. They include many of the things I mentioned previously, such as HP, Arjuna Technologies, JBoss, Red Hat, standards involvement etc.

But they all pale into insignificance when I look at my 9 year old son! And then there's nothing more I can really say except thanks.

Sunday, September 04, 2011

The impact of Arjuna

I've mentioned before that I had the privilege of speaking at Santosh's retirement ceremony. I've also said on several occasions how much Santosh and the Arjuna project have influenced my life over the years. So I decided to speak about the transition of Arjuna from a research project that was originally just the vehicle for several of us to get our PhDs, through to today when it's at the heart of the most downloaded application server in the world.

Fairly obviously I have lived through the transition over the past 25 years. And despite having parted ways with my company in 2005, I've been able to continue to work with them, as well as obviously shepherding the transaction system through JBoss and Red Hat. However, it wasn't until I started to write my presentation that everything we've done over the years came back to me. (I suppose that being so close to things sometimes makes you forget.) I found it really hard to cram 25 years into a 60 minute session, so many things had to be left out or confined to a single bullet. For a start, when Arjuna was still a research effort it managed to help at least a dozen of us get PhDs, was the basis for over 50 papers and technical reports, and influenced distributed systems research and companies from IBM to Sun Microsystems.

But it's when you look beyond the research that the real impact becomes apparent. For a start, in 1994 we used it to implement a distributed student registration system that is still not matched by the one now provided by a certain large business management software purveyor. In 1995 the OTS was being developed and that was already influenced by Arjuna, since Graeme was now at Transarc. It wasn't too long before we began to implement an OTS compliant transaction system using Arjuna and this was my first dealing with standards. We also got involved with IBM, Alcatel and others in defining standards for extended transactions through the CORBA Activity Service (which would later be the basis for the various Web Services transactions efforts.) At about the same time Stuart was driving the workflow submission with Nortel and working on OpenFlow.

Then in 1995 Sun released Oak, later to become Java. We all started to use it in a number of areas, including games, a browser (a great way to learn HTML) and a web server. I looked at end-to-end transactions and then decided that an even better way to learn the language would be to implement Arjuna in Java. Over two weeks at Christmas 1996 JavaArjuna was born (later to become JTSArjuna when I ported the OTS). This was before J2EE, before JTA and before JTS. So not only was this the world's first 100% pure Java JTS, it was the world's first 100% pure Java transaction service.

It was around then that we created a company to market the Java and C++ implementations. We were acquired by Bluestone, which was subsequently acquired by HP, and Arjuna went into their product suites to compete against BEA and IBM (there was no sign of Oracle middleware in those days!). While our time at HP was limited, we still managed to work on two Web Service transactions standards efforts as well as produce the world's first such product. We also branched out into high performance messaging and building an ORB.

When HP decided it couldn't make a go of software, we created another startup to concentrate on transactions and messaging. We had several successful years, making sales to the likes of TIBCO and webMethods, creating two new Web Service standards committees in OASIS and finalising two of them (BTP and WS-TX). We also found a market by replacing the transactions and messaging components in JBoss 3 with our own. And within all this, there was still time to write many papers, give many presentations and achieve more world firsts, such as XTS.

As I said earlier, in 2005 we sold transactions to JBoss and I bid farewell to Arjuna the company, though obviously Arjuna the technology stayed pretty close! Over the intervening 6 years and an acquisition by Red Hat, we've seen Arjuna (aka JBossTS) incorporated into every version of AS as well as all of our platforms and many projects, even if they're not written in Java. The teams have branched out into REST as well as Blacktie, to offer XATMI support. There's also work on software transactional memory using JBossTS and now, with the move of Red Hat into the cloud, it's available in OpenShift and beyond.

Even this blog is way too short to cover everything that has happened on this 25 year long journey. I haven't been able to cover other aspects such as OpenFlow and messaging, or the impact of the people who have passed through the Arjuna project and Arjuna companies. I've also only hinted at how all of the research we did at the University or in industry has influenced others over the years. I think in order to really do the Arjuna story justice I need to write a book!

Monday, August 29, 2011

Enterprise middleware and PaaS

I wanted to say more about why existing enterprise middleware stacks can be (should be) the basis for realistic PaaS implementations. If I get time, I may write a paper and submit it to a journal or conference but until then, this will have to do. I'm talking about this at JavaOne this year too, so a presentation may well come out soon.

Sunday, August 21, 2011

Fault tolerance

There was a time when people in our industry were very careful about using terms such as fault tolerance, transactions and high availability, to name just three. Back before the Internet really kicked off (really when the web came along), if you were emailing someone then they tended either to be in academia, in which case they'd be summarily shot for misusing a term, or in the DoD, in which case they'd probably be shot too! If you were publishing papers or your thoughts for wider review, you tended to have to wait a year to see publication, and that was if the reviewers didn't shoot you down for misusing terms, in which case you had to start all over again. So it paid to think long and hard before you did the equivalent of hitting submit.

Today we live in a world of instant publishing and less and less peer review. It's also unfortunate that, despite the fact that more and more papers, articles and journals are online, it seems that fewer and fewer people are spending the time to research things and read up on the state of the art, even if that art was produced decades earlier. I'm not sure if this is because people simply don't have time, simply don't care, don't understand what others have written, or something else entirely.

You might ask what it is that has prompted me to write this entry. Well, on this particular occasion it's people using the term 'fault tolerance' in places where it may be accurate when considering the meaning of the words in the English language, but not when looking at the scientific meaning, which is often very different. For instance, let's look at one scientific definition of the term (software) 'fault tolerance'.

"Fault tolerance is intended to preserve the delivery of correct service in the presence of active faults. It is generally implemented by error detection and subsequent system recovery.
Error detection originates an error signal or message within the system. An error that is present but not detected is a latent error. There exist two classes of error detection techniques: (a) concurrent error detection, which takes place during service delivery; and (b) preemptive error detection, which takes place while service delivery is suspended; it checks the system for latent errors and dormant faults.
Recovery transforms a system state that contains one or more errors and (possibly) faults into a state without detected errors and faults that can be activated again. Recovery consists of error handling and fault handling. Error handling eliminates errors from the system state. It may take two forms: (a) rollback, where the state transformation consists of returning the system back to a saved state that existed prior to error detection; that saved state is a checkpoint, (b) rollforward, where the state without detected errors is a new state."

There's a lot in this relatively simple definition. For a start, it's clear that recovery is an inherent part, and that includes error handling as well as fault handling, neither of which are trivial to accomplish, especially when you are dealing with state. Even error detection can seem easy to solve if you don't understand the concepts. Over the past 4+ decades all of this and more has driven the development of protocols behind transaction processing, failure suspectors, strong and weak replication protocols, etc.
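To make the rollback form of recovery from the definition above concrete, here's a minimal sketch: a component checkpoints its state before risky work, and error handling restores the last checkpoint when an error is detected. The class and method names are hypothetical, purely for illustration (this is not the JBossTS API, and real systems add durable logging, error detection and fault handling on top):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A toy state machine supporting checkpoint-based rollback recovery.
// On error detection, recovery returns the system to the last saved
// (checkpointed) state, rather than rolling forward to a new one.
class RecoverableCounter {
    private int state = 0;
    private final Deque<Integer> checkpoints = new ArrayDeque<>();

    void checkpoint() {            // save current state before risky work
        checkpoints.push(state);
    }

    void apply(int delta) {        // normal service delivery
        state += delta;
    }

    void rollback() {              // error handling: restore last checkpoint
        if (!checkpoints.isEmpty()) {
            state = checkpoints.pop();
        }
    }

    int value() {
        return state;
    }
}

public class RollbackDemo {
    public static void main(String[] args) {
        RecoverableCounter c = new RecoverableCounter();
        c.apply(5);
        c.checkpoint();            // saved state: 5
        c.apply(100);              // suppose this update is later found to be erroneous
        c.rollback();              // error detected: return to the checkpoint
        System.out.println(c.value()); // prints 5
    }
}
```

Even this toy shows why the hard parts lie elsewhere: deciding that an error has occurred (detection) and ensuring the checkpoint itself survives failures are exactly what the protocols mentioned below were developed for.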

So it's both annoying and frustrating to see people talking about fault tolerance as if it's as easy to accomplish as, say, throwing a few extra servers at the problem or restarting a process if it fails. Annoying in that there are sufficient freely available texts out there to cover all of the details. Frustrating in that the users of implementations based on these assumptions are not aware of the problems that will occur when failures happen. As with those situations I've come across over the years where people don't believe they need transactions, the fact that failures are not frequent tends to lull you into a false sense of security!

Now before anyone suggests that this is me being a luddite, I should point out that I'm a scientist and I recognise fully that theories and practices in many areas of science, e.g., physics, are developed based on observations and can change when they prove to not be sufficient to describe the things you see. So for instance, unlike those who in Galileo's time continued to believe the Earth was the centre of the Universe despite a lot of data to the contrary, I accept that theories, rules and laws laid down decades ago may have to be changed today. The problem I have in this case though, is that nothing I have seen or heard in the area of 'fault tolerance' gives me an indication that this is the situation currently!

Tuesday, August 09, 2011

A thinking engineer

I've worked with some great engineers in my time (and continue to work with many today), and as an aside, I like to think some people might count me in their list. But back to topic: over the years I've also met some people who would be considered great engineers by others, but whom I wouldn't rate that highly. The reason for this is also one of the factors that I always cite when asked what constitutes a great engineer. Of course I rate the usual things, such as the ability to code, understand algorithms, and know a spin-lock from a semaphore. Now maybe it's my background (I really think not, but threw that out there just in case I'm wrong) but I also add the ability to say no, or to ask why or what if? To me, it doesn't matter whether you're an engineer or an engineering manager, you've got to be confident enough to question things you are asked to do, unless of course you know them to be right from the start.

As a researcher, you're expected to question the work of others who may have been in the field for decades, published dozens of papers and be recognised experts in their fields. You don't take anything at face value. And I believe that that is also a quality really good engineers need to have too. You can be a kick-ass developer, producing the most efficient bubble-sort implementation available, but if it's a solution to the wrong problem it's really no good to me! I call this The Emperor's New Clothes syndrome: if he's naked then say so; don't just go with the flow because your peers do.

Now as I said, I've had the pleasure to work with many great engineers (and engineering managers) over the years, and this quality, let's call it "thinking and challenging" is common to them all. It's also something I try to foster in the teams that work for me directly or indirectly. And although I've implicitly been talking about software engineering, I suspect the same is true in other disciplines.

True Grit update

A while ago I mentioned that I was reading the novel True Grit and was a fan of the original film, which I watched when I was a child. I also mentioned that I probably wouldn't be watching the remake of the film as I couldn't see how the original could be improved. Well, on the flight back from holiday I had the opportunity to watch it and decided to give it a go.

I've heard a few things about the new film and they can all be summarised as saying that it was a more faithful telling of the story than the John Wayne version. After watching both, and reading the book, I have to wonder if those reviewers knew WTF they were on about! Yes the new film is good, but it's nowhere near as good as the original. And as for faithfulness to the book? Well, with the exception of the ending, the original film is far closer to the book (often word for word). While watching the remake I kept asking myself time and again why they had changed this or that, or why they had completely rewritten the story in parts?!

If you have a great novel that you know works well on screen, why do script writers seem incapable of leaving it untouched? Maybe they decided that they had to make the new film different enough from the original so people wouldn't think it was a scene-for-scene copy. But in that case, why remake it in the first place? FFS people: if an original film is good enough, leave it alone and learn to produce some original content for once! And for those of you interested in seeing the definitive film adaptation of the book, check out the John Wayne version.

Friday, July 29, 2011

Gone fishing!

I'm on holiday in Canada, visiting my in-laws. Usually it takes me a few days to wind down from work, but it happens and I relax for the rest of the holiday. (Well, until a few days before I come back, when I start to think about work again!) Access to email is usually limited, as I'd need to borrow time on my father-in-law's machine. That extra effort is usually enough for me to only check email every few days.

Unfortunately this time I brought my iPad and iPhone, both of which I connected to the wifi. Checking email was too easy and as a result I was working every day! Fortunately it only took me about 4 days to realise this (with some not-so-subtle hints from family) and I disabled wifi. This means I can now get on with the holiday. Since we are out in the middle of nowhere this means sitting by the pool reading a book on my (wifi disabled) iPad, or fishing!

Facebook as Web 3.0?

I'm not on Facebook and think social networking sites are inherently anti-social (you can't beat a good pub!) However, I know many people who are into them and I've even decided to check out Google+. So they probably have a place in the web firmament.

But recently I've started to see more and more adverts replacing the good old vendor URL with a Facebook version, e.g., moving from www.mycompany.com to www.Facebook.com/mycompany. Now at first this might seem fairly innocuous, but when you dig deeper it's anything but! As I think Tim Berners-Lee has stated elsewhere, and I'm sure Google has too, the data that Facebook is maintaining isn't open for a start, making it harder to search outside of their sub-web. And of course this is like a data cloud in some ways: you're offshoring bits of your data to someone else, so you'd better trust them!

I don't want to pick on any single vendor, so let's stop naming at this point. Even if you can look beyond the lack of openness and the fact that you're basically putting a single vendor in charge of this intra-web, what about all of the nice things that we take for granted from HTTP and REST? Things such as caching, intelligent redirects and HATEOAS. Can we be sure that these are implemented and managed correctly on behalf of everyone?

And what's to say that at some point this vendor may decide that Internet protocols are just not good enough or that browsers aren't the right view on to the data? Before you know it we would have a multiverse of Webs, each with their own protocols and UIs. Interactions between them would be difficult if not impossible.

Now of course this is a worst case scenario and I have no idea if any vendors today have plans like this. I'd be surprised if they hadn't been discussed though! So what does this mean for this apparent new attitude to hosting "off the web" and on the "social web"? Well for a start I think that people need to remember that despite how big any one social network may be, there are orders of magnitude more people being "anti-social" and running on the web.

I'm sure that each company that makes the move into social does so on the back of sound marketing research. Unfortunately the people making these decisions aren't necessarily the ones who understand what makes the web work, yet they are precisely the people who need it to work! I really hope that this isn't a slippery slope towards that scenario I outlined. Everyone on the web, both social and anti-social, would lose out in the end! Someone once said that "just because you can do something doesn't mean you should do something."

Thursday, July 21, 2011

The end of a space era

It's sad to see the end of the space shuttle era. I remember being excited to watch the very first launch whilst at school. I remember exactly where I was when Challenger was destroyed: at university, standing in a dinner queue. I remember watching when they deployed (and then later fixed) Hubble. Again, I can remember where I was when Columbia was destroyed: at home watching! I've even been to see a launch and heard it come back a week or so later.

So it's fair to say that I grew up with the shuttle over these past 30 years and it's going to be strange not having it around any more. Despite the fact that it may never have been the perfect launch vehicle (I still recall early discussions around HOTOL, for instance), I think it did its job well. I know I'll miss it.

Tuesday, July 19, 2011

Santosh's retirement

I think I've spoken a number of times about how important Professor Shrivastava has been in my academic and professional career over the past 25 years (ouch!) Well he's retiring soon and the University will never quite be the same, at least as far as I'm concerned. But at least I get a chance to speak at his retirement event. Congratulations Santosh and many thanks!

Monday, July 18, 2011

InfoQ and unREST

I wrote this article for InfoQ because I thought what JJ had said was interesting enough that a wider audience should consider it. I'm still not sure if I'm pleased, surprised or disappointed with the level of comments and discussion that it received. Something for me to contemplate when I'm on vacation I suppose.