Saturday, March 21, 2015

Six Years as JBoss CTO

It's coming up to 6 years since Sacha asked me to take over from him as CTO. It was a privilege then and it remains so today. Although Arjuna was involved with JBoss for many years, and my good friend and colleague Bob Bickel was a key member of the company, I didn't officially join JBoss until 2005. I found it to be a great company and a lot more than I ever expected! A decade on, and now part of Red Hat, it remains an exciting place to work. I wake up each morning wondering what's in store for me as CTO and I still love it! Thanks Marc, Sacha and Bob!

Sunday, March 15, 2015

Retrospecting: Sun slowly setting

I remember writing this entry a few years back (2010), but it appears I forgot to hit submit. Therefore, in the interests of completeness, I'm going to post it even though it's 5 years old!

--

So the deal is done and Sun Microsystems is no more, and I'm filled with mixed emotions. Let's ignore my role within Red Hat/JBoss for now, as there are certainly some emotions tied up in that. It's sad to see Sun go and yet I'm pleased they haven't gone bust. I have a long history with Sun as a user and, through Arjuna, as a prospective partner. When I started my PhD back in the mid 1980's my first computer was a Whitechapel, the UK equivalent to the Sun 360 at the time. But within the Arjuna project the Sun workstation was king and to have one was certainly a status symbol. I remember that each year, when the new Sun catalogue came out or we had a new grant, we'd all look around for the next shiny new machine from Sun. We had 380s, Sparc, UltraSparc and others all the way through the 1990's (moving from SunOS through to the Solaris years). The Sun workstation and its associated software were what you aspired to get, either directly or when someone left the project! In fact the Arjuna project was renowned within the Computing Department for always having the latest and greatest Sun kit.

The lustre started to dim in the 1990's when Linus put out the first Linux distribution and we got Pentium 133s for a research grant into distributed/parallel computing (what today people might call Grid or Cloud). When a P133 running Linux was faster than the latest Sun we knew the writing was on the wall. By the end of the decade most Sun equipment we had was at least 5 years old and there were no signs of it being replaced by more Sun machines.

Throughout those years we were also in touch with Sun around a variety of topics. For instance, we talked with Jim Waldo about distributed systems and transactions, trying to persuade them not to develop their own transaction implementation for Jini but to use ours. We also got the very first Spring operating system drop along with the associated papers. We had invitations to speak at various Sun sites, as well as the usual job offers. Once again, in those days Sun was the cool place to be seen and to work.

But that all started to change as the hardware dominance waned. It was soon after Java came along, though I think that is coincidental. Although Solaris was still the best Unix variant around, Linux and NetBSD were good enough, at least in academia. Plus they were a heck of a lot cheaper and offered easier routes to do some interesting research and development, e.g., we started to look at reworking the Newcastle Connection in Linux, which would have been extremely difficult to do within Solaris.

Looking back I can safely say that I owe a lot to Sun. They were the best hardware and OS vendor in the late 1980's and early 1990's, providing me and others in our project with a great base on which to develop Arjuna and do our PhDs. In those days before Java came along they were already the de facto standard for academic research and development, at least within the international communities with which we worked. I came to X11, InterViews, network programming, C++ etc. all through Sun, and eventually Java of course. So as the Sun sinks slowly in the west I have to say thanks to Sun for 20 years of pleasant memories.

Chromebook?

I travel a lot and until a couple of years ago I lugged around a 17" laptop. It was heavy, but it was also the only machine I used, and since I don't use an external monitor at home, just in the office, I needed the desktop space. It was also almost unusable on a plane, especially if the person in front decided to recline their seat! Try coding when you can't see the entire screen!

In the past I'd tried to have a second, smaller laptop for travelling, but that didn't work for me. Essentially the problem was one of synchronisation: making sure my files (docs, source, email etc.), which mainly resided on my 17" (main) machine, were copied across to the smaller (typically 13") machine and back again once I was home. Obviously it's possible to do - I did it for a couple of years. But it's a PITA, even when automated. So eventually I gave up and went back to having just one machine.

Enter the tablet age. I decided to try to use a tablet for doing things when on a plane, but still just have a single machine and travel with it. That worked, but it was still limiting, not least because I don't like typing quickly on a touch screen - too many mistakes. Then I considered getting a keyboard for the tablet and suddenly thought about a Chromebook, which was fortunate because I happened to have one that was languishing unused on a shelf.

Originally I wasn't so convinced about the Chromebook. I'd lived through the JavaStation era and we had one in the office - one of the very first ever in the UK. Despite being heavy users of Java, the JavaStation idea was never appealing to me and it didn't take off despite a lot of marketing from Sun. There were a number of problems, not least of which were the lack of applications and the fact that the networks at the time really weren't up to the job. However, today things are very different - I use Google docs a lot from my main development machine as well as from my phone.

Therefore, although the idea of using a Chromebook hadn't appealed to me initially, it started to grow on me. What also helped push me over the tipping point was that I grew less and less interested in my tablet(s). I don't use them for playing games or social media; typically they are (were) used for email or editing documents, both of which I can do far better (for me at least) on a Chromebook.

So I decided that I could probably get away with a Chromebook on a plane, especially since it has offline capabilities - not many planes have wifi yet. But then I began to think that perhaps for short trips I could get away with it for everything, i.e., leave the main machine at home. Despite the fact I don't get a lot of time to spend coding, I still do it as much as I can. But for short trips (two or three days) I tend to only be working on documents or reading email. If I didn't take my main machine I wouldn't be able to code, but it wouldn't be such a problem. I also wouldn't need to sync code or email between the Chromebook and my main machine.

Therefore, I decided to give it a go earlier this year. I'd used the Chromebook at home quite extensively but never away from there. This meant there were a few teething problems, such as ensuring the work VPN worked smoothly and installing a few more applications that allowed for offline editing or video watching instead of real-time streaming. It wasn't a perfect first effort, but I think it was successful enough for me to give it another go next time I travel and know I won't need to code. I've also pretty much replaced my tablets with the Chromebook.

Saturday, March 07, 2015

More thoughts on container-less development


It's time to get back to the concept of container-less development. I gave an outline of my thinking a while back, but let's remember that I'm not talking about containers such as docker or rocket, though they do have an impact on the kinds of containers I am concerned about: those typically associated with application servers and specifically Java application servers. Over the years the Java community has come to associate containers with monolithic J2EE or Java EE application servers, providing some useful capabilities but often in a way which isn't natural for the majority of Java developers who just "want to get stuff done."

Now of course those kinds of containers did exist. Probably the first such container was what came out of implementing the J2EE standard, which itself evolved from CORBA - a technology which, despite some bad press, wasn't as bad as some people make out (though that's perhaps a topic for a different entry). CORBA was based around the concept of services, but back then the natural unit of concurrency was the operating system process, because threading wasn't a typical aspect of programming languages. Early threading implementations, such as those built on setjmp/longjmp or Sun's LWP package for SunOS/Solaris, were very much in their infancy. When Java came along with its native support for threads, CORBA was still the popular approach for enterprise middleware, so it was fairly natural to try to take that architecture and transplant it into Java. What resulted was the initial concept of a container of interacting services, minus the distribution aspect, to improve performance. (It's worth noting that the success of Java/J2EE, the inclusion of Java as a supported language for CORBA, and the increase in thread support in other languages resulted in a reverse imitation with the CORBA Component Model Architecture.)

Now of course there's a lot more to a container these days than the services (capabilities) it offers. But in the good ol' days this approach of kickstarting the J2EE revolution around CORBA resulted in some inefficient implementations. However, as they say, hindsight is always 20/20. What exacerbated things, though, is that despite gaining more and more experience over the years, most J2EE application servers didn't really go back to basics and tackle the problem with a clean slate. Another problem, which frameworks such as Spring tried to address, was that CORBA didn't really have a programming model, but again that's for another entry.

Unfortunately this history of early implementations hasn't always had a positive impact on current implementations. Change can be painful and slow. And many people have used these initial poor experiences with Java containers as a reason to stay away from containers entirely. That is unfortunate, because there is a lot of good work that containers do behind the scenes and which we take for granted (knowingly or unknowingly). It includes things such as connection pooling, classloader management, thread management, security etc. Of course, as developers we were able to manage many of these things ourselves before containers came on the scene, and we still can today; CORBA programmers did exactly the same thing (anyone remember dependency hell with shared libraries in C/C++?). But for complex applications - those (in a single address space) that grow in functionality and are typically built by a team, or from components built by different programmers - handling all of this yourself can become almost a full-time job in itself. These aspects of the container are useful for developers.
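As a rough illustration of what "the container handles it for you" means in practice, here's a minimal sketch using the standard javax.annotation.Resource and javax.sql.DataSource APIs. It assumes the class is deployed as a container-managed component, and the JNDI name is purely an example, not a reference to any particular server's configuration:

```java
import javax.annotation.Resource;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;

public class OrderRepository {

    // Container-managed: the application server injects a pooled DataSource and
    // looks after connection lifecycle, security and transaction enlistment.
    // The JNDI name below is illustrative only.
    @Resource(lookup = "java:jboss/datasources/ExampleDS")
    private DataSource dataSource;

    public void saveOrder(String orderId) throws SQLException {
        // Connections come from (and return to) the container's pool.
        try (Connection connection = dataSource.getConnection()) {
            // ... do the actual persistence work here ...
        }
    }

    // Without a container you would be writing and tuning the pooling, timeout
    // and recovery logic yourself, for every application you build.
}
```

Nothing here is magic, and you could certainly hand-roll the pooling and lookup code, but multiplied across classloading, threading and security it quickly adds up.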

It's important to understand that some containers have changed for the better over the years. They've become more streamlined, more fit for purpose, and they look at the problem domain from a whole new perspective. The results are lightweight containers that do a few core things really well - the things that all (or the majority of) developers will always need - and anything else is an add-on that is made available to the developer or application on an as-needed basis. The idea is that any of these additional capabilities are selected (dynamically or statically) with an understanding of the trade-offs they may represent, e.g., overhead versus functionality, so enabling them is an informed choice rather than something imposed by the container developers. This is very much like the micro-kernels that we saw develop from the old monolithic operating systems back in the 1980's. So even if you're not a Java EE fan, or don't believe you need all of the services that often come out of the box such as transactions, messaging and security, the container is probably doing something useful for you that you'd rather not have to handle manually.
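To make the "opt-in capabilities" idea concrete, here's a purely hypothetical, self-contained sketch. LightweightContainer and its builder are illustrative names I've made up for this post; they don't correspond to any real product's API. The point is only that each capability is enabled as a conscious choice:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch: capabilities are selected explicitly by the developer
// rather than being imposed wholesale by the container.
public final class LightweightContainer {

    private final Set<String> capabilities;

    private LightweightContainer(Set<String> capabilities) {
        this.capabilities = capabilities;
    }

    public static Builder builder() {
        return new Builder();
    }

    public void deployAndStart(String archive) {
        // A real container would boot only the selected subsystems here.
        System.out.println("Deploying " + archive + " with capabilities: " + capabilities);
    }

    public static final class Builder {
        private final Set<String> capabilities = new LinkedHashSet<>();

        public Builder withCapability(String name) {
            capabilities.add(name); // each addition is an accepted trade-off
            return this;
        }

        public LightweightContainer build() {
            return new LightweightContainer(capabilities);
        }
    }

    public static void main(String[] args) {
        // Core services (classloading, thread management) are assumed; everything
        // else is opted into explicitly, e.g. transactions but not clustering.
        LightweightContainer container = builder()
                .withCapability("transactions")
                .withCapability("messaging")
                .build();
        container.deployAndStart("my-application.war");
    }
}
```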

Apart from the complexity and overhead that containers may impose (or are assumed to impose), there's the ability to dynamically update a running instance. For instance, adding a new service that wasn't available (or needed) at the time the container booted up. Or migrating a business object from one container to another, which may require some dependent services to be migrated to the destination container. Or patching. Or simply adding a more up-to-date version of a service whilst retaining the old service for existing clients. Not all containers support this dynamism, but many do, and it's a capability that does not come without cost, even for the most efficient implementations.
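The kind of bookkeeping involved in that last case - a new version of a service coexisting with the old one for existing clients - can be sketched very simply. This is a hypothetical illustration, not how any particular container implements it:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.function.Function;

// Hypothetical sketch of a versioned service registry: existing clients keep
// the version they bound to, while new clients pick up the latest one.
public final class VersionedServiceRegistry {

    private final Map<String, ConcurrentSkipListMap<Integer, Function<String, String>>> services =
            new ConcurrentHashMap<>();

    public void register(String name, int version, Function<String, String> implementation) {
        services.computeIfAbsent(name, key -> new ConcurrentSkipListMap<>())
                .put(version, implementation);
    }

    public Function<String, String> lookupLatest(String name) {
        return services.get(name).lastEntry().getValue();
    }

    public Function<String, String> lookup(String name, int version) {
        return services.get(name).get(version);
    }

    public static void main(String[] args) {
        VersionedServiceRegistry registry = new VersionedServiceRegistry();
        registry.register("greeter", 1, who -> "Hello, " + who);

        Function<String, String> boundByExistingClient = registry.lookup("greeter", 1);

        // A newer version is deployed while the container keeps running...
        registry.register("greeter", 2, who -> "Hello, " + who + "!");

        System.out.println(boundByExistingClient.apply("old client"));            // still v1
        System.out.println(registry.lookupLatest("greeter").apply("new client")); // now v2
    }
}
```

A real container has to do far more (classloader isolation between versions, draining old clients, undeploying retired versions), which is exactly the cost referred to above.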

Whether or not you agree with it, it should be apparent by now why there's a growing movement away from containers. Some people have experienced the overhead some containers impose for very little value. Some people haven't, but trust the opinions of their colleagues and friends. Still others were never keen on containers in the first place. Whatever the reasons, the movement does exist. And then along comes the new generation of (different) containers, such as docker and rocket, which some in the container-less movement believe obviate the need for old-style containers entirely. The argument goes something like this (I'll use docker as an example simply because if I use the term container here it will become even more confusing!): docker produces immutable images and is very easy to use, so rather than worry about creating dynamically updateable containers within it, the old-style "flat classpath"/container-less development strategies make more sense. In other words, work with the technology to get the best out of what it provides, rather than trying to do more that really doesn't make sense or give you any real benefit.

This is a good argument, and one that is not wrong. Docker images are certainly immutable and fairly easy to use. But that doesn't mean they obviate the need for Java containers. You've got to write your application somehow, and that application may be complex, built by a team or from components created by developers from different organisations over a period of years. And the immutability aspect of docker images is only true between instantiations of the image, i.e., the state of a running instance can be changed; it's just that once it shuts down all changes are lost and any new instance starts from scratch with the original state. But docker instances may run for a long time. If they're part of a highly available group, with each instance a replica of the others, then the replica group could be running indefinitely, and changes made to the state of one are applied to the others (new replicas would have their state brought up to date as they join the group). Therefore, whilst immutability is a limitation, it's no different from having only an in-memory database with no persistent backing store: it can be architected around and could even be a performance benefit.

If you believe that the mutability of a running docker instance is something that makes sense for your application or service, then long-running instances are immediately part of your design philosophy. As a result, the dynamic update aspect of containers that we touched on earlier immediately becomes a useful thing to have. You may want to run multiple different instances of the "same" service. You may need to patch running instances whilst a pre-patched image is deployed into the application or replica group (eventually the pre-patched docker instances will replace the in-memory patched versions by natural attrition).

And then we have microservices. I've said enough about them elsewhere, so I won't go into specific details. However, with microservices we're seeing developers start to consider SOA-like deployments for core capabilities (e.g., messaging) or business logic, outside the address space of other capabilities that would normally be co-located within the Java container. This is very much like the original CORBA architecture and it has its merits - it is definitely a deployment architecture that continues to make sense decades after it was first put into production. But microservices don't remove the need for containers, even if they're deployed using docker containers, which is becoming a popular implementation choice. As I said in my original article on this topic, in some ways the operating system becomes your container in this deployment approach. But within these docker instances, for example, Java containers are still useful.

Now of course I'm not suggesting that Java containers are the answer to all use cases. There are many examples of successful applications that don't use these containers and probably wouldn't have benefited much from them. Maybe some microservices implementations won't need them. But I do believe that others will. And of course the definition of the container depends upon where you look - just because it's not as obvious as the traditional Java container doesn't mean there's not a container somewhere, e.g., your operating system. However they're implemented, containers are useful and going container-less is really not an option.