Monday, May 21, 2012
Jim Gray
Friday, April 27, 2012
Java Forum at the Titanic Centre
Java or the JVM
Friday, April 06, 2012
Transactions and parallelism and actors, oh my!
In just 4 years' time I'll have spent 3 decades researching and developing transactional systems. I've written enough about this over the years to not want to dive into it again, but suffice it to say that I've had the pleasure of investigating a lot of uses for transactions and their variations. Over the years we've looked at how transactions are a great building block for fault-tolerant distributed systems, most notably through Arjuna, which, with the benefit of hindsight, was visionary in a number of ways. A decade ago, using transactions outside of the database as a structuring mechanism was more research than anything else, as was using them in massively parallel systems (multi-processor machines were rare).
However, today things have changed. As I've said several times before, computing environments today are inherently multi-core, with true threading and concurrency, and all that that entails. Unfortunately, our programming languages, frameworks and teaching methods have not necessarily kept pace with these changes, often resulting in applications and systems that are inherently unreliable or brittle in the presence of concurrent access and, worse still, unable to recover from the failures that result.
Now of course you can replicate services to increase their availability in the event of a failure. Maybe use N-version programming to reduce or remove the chance that a bug in one implementation impacts all of the replicas. But whereas strongly consistent replication is relatively easy to understand, it has limitations which have resulted in weak consistency protocols that trade things like ease of use and application-level consistency for performance (e.g., your application may now need to be aware that data is stale). This is why transactions, either by themselves or in conjunction with replication, have been and continue to be a good tool in the arsenal of architects.
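As a concrete illustration of that stale-data point, here's a minimal sketch in Java (the Replica class and the staleness window are entirely hypothetical, not any real replication API) of an application deciding for itself whether a weakly consistent read is fresh enough:

// Illustrative sketch only: one way an application can be made aware of
// stale data when reading from a weakly consistent replica.
import java.time.Duration;
import java.time.Instant;

public class StaleReadExample {

    // A versioned value as a replica might return it: the data plus the
    // time at which the replica last synchronised with the primary.
    record VersionedValue(String data, Instant lastSyncedAt) {}

    // Hypothetical stand-in for a weakly consistent replica.
    static class Replica {
        private volatile VersionedValue current =
                new VersionedValue("initial", Instant.now());

        VersionedValue read() {
            return current;
        }

        void applyUpdateFromPrimary(String data) {
            current = new VersionedValue(data, Instant.now());
        }
    }

    public static void main(String[] args) {
        Replica replica = new Replica();
        replica.applyUpdateFromPrimary("latest order status");

        // The application, not the infrastructure, chooses what "stale" means.
        Duration acceptableStaleness = Duration.ofSeconds(30);

        VersionedValue value = replica.read();
        Duration age = Duration.between(value.lastSyncedAt(), Instant.now());

        if (age.compareTo(acceptableStaleness) > 0) {
            System.out.println("Value may be stale (" + age.toSeconds() + "s old): " + value.data());
        } else {
            System.out.println("Value considered fresh: " + value.data());
        }
    }
}

The point is simply that with weak consistency the freshness decision moves out of the infrastructure and into the application.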
We have seen transactions used in other frameworks and approaches, such as the actor model and software transactional memory, sometimes trading off one or more of the traditional ACID properties. But whichever approach is taken, the underlying fundamental reason for using transactions remains: they are a useful and straightforward mechanism for creating fault-tolerant services and individual objects that work well for arbitrary degrees of parallelism. They're not just useful for manipulating data in a database, and neither are they to be considered purely the domain of distributed systems. Of course there are areas where transactions would be overkill or where some implementations might be too much of an overhead. But we have moved into an era where transaction implementations are lighter weight and more flexible than they used to be. So considering them from the outset of an application's development is no longer something that should be eschewed.
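To illustrate what I mean by transactions as a structuring mechanism for individual concurrent objects, here is a deliberately toy sketch, not Arjuna, JTA or any STM library, in which state changes are provisional until commit and are discarded on rollback (a real implementation would add isolation between concurrent transactions, nesting and durability):

// Toy sketch of a transactional object: begin/commit/rollback as the
// structuring mechanism around otherwise ordinary state changes.
public class TransactionalCounter {

    private long committed = 0;        // last committed state
    private long working = 0;          // provisional state within the current transaction
    private boolean inTransaction = false;

    public synchronized void begin() {
        if (inTransaction) throw new IllegalStateException("transaction already active");
        working = committed;
        inTransaction = true;
    }

    public synchronized void increment() {
        if (!inTransaction) throw new IllegalStateException("no active transaction");
        working++;
    }

    public synchronized void commit() {
        if (!inTransaction) throw new IllegalStateException("no active transaction");
        committed = working;           // provisional changes become permanent
        inTransaction = false;
    }

    public synchronized void rollback() {
        if (!inTransaction) throw new IllegalStateException("no active transaction");
        working = committed;           // discard provisional changes
        inTransaction = false;
    }

    public synchronized long value() {
        return committed;
    }

    public static void main(String[] args) {
        TransactionalCounter counter = new TransactionalCounter();
        counter.begin();
        counter.increment();
        counter.rollback();                    // failure path: the increment never happened
        System.out.println(counter.value());   // prints 0
    }
}

Even at this scale the shape is the same one that makes larger transactional systems recoverable: work either commits in its entirety or it never happened.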
Back to the Dark Ages
The other day, due to severe snowstorms (for the UK), we ended up snowed in and without power or heating for days. During this time I discovered a few things. For a start, having gas central heating that is initiated with an electric starter is a major flaw in the design! Next, laptop batteries really don't last long. And a 3G phone without access to 3G (even the phone masts were without power!) makes a great brick.
But I think the most surprising thing for me was how ill prepared I was to deal with the lack of electricity. Now don't get me wrong - we've had power outages before, so had a good stock of candles, blankets, torches and batteries. But previous outages have been infrequent and lasted only a few hours, maybe up to a day. And fortunately they've tended to be in the summer months (not sure why). So going without wasn't too bad.
However, not this time, and I've been trying to understand why. I think it's a combination of things. The duration, for one, but also the fact that it happened during the week when I had a lot to do at work. Missing a few hours' connectivity is OK because there are always things I can do (or do better) when there are no interruptions from email or the phone. But extend that into days and it becomes an issue, especially when alternative solutions don't work, such as using my 3G phone for connectivity or to read backup emails.
What is interesting is that, coincidentally, we're doing a check of our processes for coping with catastrophic events. Now whilst I think that this power outage hardly counts as such an event, it does drive home that my own personal ability to cope is lacking. After spending a few hours thinking about this (I did have plenty of time, after all!) I'm sure there are things I can do better in the future, but probably the one thing that remains beyond my control is lack of network (3G as a backup has shown itself to be limiting). I'm not sure I can justify a satellite link! So maybe I just accept this as a weak link and hope it doesn't happen again. If it does, though, we may well be investing in a generator.
Sunday, March 11, 2012
Big Data
Tuesday, February 21, 2012
Clouds for Enterprises (C4E) 2012
Sunday, February 19, 2012
HyperCard
When the Web came along, the way it worked seemed so obvious. Hyperlinks between resources, whether they're database records (cards) or servers, make a lot of sense for certain types of application. But extending that to a world-wide mesh of disparate resources was a brilliant leap. I'm sure that HyperCard influenced the Web, just as it influenced several generations of developers. But I'm surprised at myself that I'd forgotten about it over the years. In fact it wasn't until the other day, when I was passing a shop window that happened to have an old Mac in it running HyperCard, that I remembered. It's over 20 years since those days, but we're all living under its influence.
Tuesday, February 14, 2012
Is Java the platform of the future?
As I've mentioned before, I think we are living through a bigger explosion of programming languages than at any time in the past four decades. Having lived through a number of the classic languages such as BASIC, Simula, Pascal, Lisp, Prolog, C, C++ and Java, I can understand why people are fascinated with developing new ones: whether it's compiled versus interpreted, procedural versus functional, or languages optimised for web development or embedded devices, I don't believe we'll ever have a single language that's right for all developer requirements.
This polyglot movement is a reality and it's unlikely to go away any time soon. Fast forward a few years and we may see far fewer languages around than today, but those that survive will have been influenced strongly by their predecessors. I do believe that we need to make a distinction between the languages and the platforms that they inevitably spawn. And in this regard I think we need to learn from history, now and quickly: unlike in the past, we really don't need to reimplement the entire stack in the next cool language. I keep saying that there are core services and capabilities that transcend middleware standards and implementations such as CORBA or Java Enterprise Edition. Well, guess what? That also means they transcend the languages in which they were originally written.
This is something that we realised well in the CORBA days, even if there were problems with the architecture itself. The fact that IDL was language neutral obviously meant your application could be constructed from components written in Java, COBOL and C++ without you having to know, or really having to care. Java broke that mould to a degree, and although Web Services are language independent, there's been so much backlash over SOAP, WSDL and friends that we forget this aspect at times. Of course it's an inherent part of REST.
However, if you look at what some are doing with these relatively new languages, there is a push to implement the stack in them from scratch. Now whilst it may make perfect sense to reimplement some components or approaches to take best advantage of particular language capabilities (e.g., nginx), I don't think that should be the norm. The kind of approach we're seeing with, say, TorqueBox or Immutant, where services implemented in one language are exposed to another in a way that makes them appear as if they were implemented natively, makes far more sense. Let's not waste time rehashing things like transactions, messaging and security, but instead concentrate on how best to offer these capabilities to the new polyglot movement so that they fit in as first-class citizens.
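As a rough illustration of that idea (the names and the registry below are hypothetical, not the TorqueBox or Immutant API), a service written once in Java can be handed to other JVM languages as an ordinary object, so calling it feels native to them:

// Hypothetical sketch: a Java-implemented service surfaced through a plain
// interface and a simple lookup, so other JVM languages can use it directly.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PolyglotServiceSketch {

    // A plain Java interface is the contract other JVM languages see.
    public interface MessagingService {
        void publish(String destination, String payload);
    }

    // One shared implementation, written and maintained once.
    static class SimpleMessagingService implements MessagingService {
        @Override
        public void publish(String destination, String payload) {
            System.out.println("[" + destination + "] " + payload);
        }
    }

    // A minimal registry standing in for whatever lookup mechanism
    // (JNDI, injection, a runtime-provided binding) the platform offers.
    static final Map<String, Object> REGISTRY = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        REGISTRY.put("messaging", new SimpleMessagingService());

        // A Ruby or Clojure script running on the same JVM could fetch the
        // same object and call publish(...) as if it were a native library.
        MessagingService messaging = (MessagingService) REGISTRY.get("messaging");
        messaging.publish("orders", "hello from Java");
    }
}

The service itself is written and hardened once; what changes per language is only how idiomatically it is surfaced.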
And to do this successfully is much more than just a technical issue; it requires an understanding of what the language offers and what its communities expect, and working with both to fit in seamlessly. Being a Java programmer trying to push Java services into, say, Ruby with a Java programmer's approach and understanding will not guarantee success. You have to understand your users and let them guide you as much as you guide them.
So I still believe that in the future Java will, should and must play an important part in Cloud, mobile, ubiquitous computing etc. It may not be obvious to developers in these languages that they're using Java, but then it doesn't need to be. As long as they have access to all of the services and capabilities they need, in a way that feels entirely natural to them, why should it matter if some of those bits are hosted on or by a Java application server, for instance? The answer is that it shouldn't. And done right it means that these developers benefit from the maturity and reliability of these systems, built up over many years of real world deployments. Far better than the alternative.
Thursday, February 09, 2012
The future of Java
Tuesday, January 31, 2012
Blogging versus tweeting?
Sunday, January 01, 2012
Transactions on Android
Saturday, December 31, 2011
PaaS 2.0?
Friday, December 23, 2011
Future of Middleware
Sunday, November 20, 2011
Wave sick?
Thursday, November 03, 2011
The future PC
Friday, October 28, 2011
HPTS 2011
This year was a mix too, with the main theme being cloud (though it turned out that cloud wasn't that prevalent). I think the best way to summarise the workshop would be as concentrating on high performance and large scale (up as well as out). With talks from the likes of Amazon, Facebook and Microsoft, we covered the gamut from incredibly large numbers of users (700 million) through to eventual consistency (which we learnt may eventually not be enough!).
Now it's important to realise that you never go to HPTS just for the talks. The people are more than 50% of the equation for this event, and it was good to see a lot of mixing and mingling. We had a lot of students here this time, so if past events are anything to go by, I am sure we will see the influence of HPTS on their future work. And I suppose that just leaves me to upload the various presentations to the web site!