I've been so busy travelling to conferences and customer engagements that I haven't had a chance to write about my trip to HPTS 2015. I've written several times about previous trips to this workshop and how it's my favourite of them all, so I won't repeat myself here. The workshop had the usual high standard of presentations, and locating it at Asilomar is always a great way to focus the mind and the conversations.
Because of the highly technical nature of the workshop I always like to use this event to try out new presentations - I know the feedback I receive will be constructive and worth hearing. This time my submission was essentially about what I'd written earlier this year concerning the evolution of application servers (application containers) driven by immutability and operating system containers, such as Docker. And I threw in a smattering of microservices, since the topic is obviously relevant and I figured Adrian would be there! My presentation was well received and the feedback clearly showed that many people at the event agreed with it.
One other positive thing to come from the workshop and my presentation was that my co-traveller and long time friend/colleague, Professor Shrivastava, saw the presentation for the first time at the event. He understood it and whilst much of what was said I and others take for granted, he believes that there are groups of people that would find it interesting enough that we should write a paper. Writing papers with Santosh is something I enjoy and it has been a very fruitful collaboration over the years, so I look forward to this!
I also want to thank James because it was our discussions after I started my initial entries on the evolution of application servers that helped to focus and clarify my thinking.
Monday, November 09, 2015
High Integrity Software 2015 conference
I was asked to give one of the keynotes at this year's High Integrity Software Conference and I have to say that I enjoyed the entire event. It's probably one of the best technical conferences I've been to for a while and I've been thinking about why that was the case. I think it's partly due to the fact that it was a very focussed, themed event with multiple tracks for just a small part (4 talks) of the day, so everyone at the event was able to concentrate on that main theme. In many ways it was similar to how conferences and workshops were "back in the day", before many of them seemed to need to try to appeal to everyone with all of the latest hot topics at the time.
The other thing that appealed to me was that I was asked to give a talk I hadn't given before: dependability issues for open source software. The presentation is now available and it was nice to be forced to put into a presentation things I've taken for granted for so many years. The feedback from the audience was very positive and then we were straight into a panel session on open source, which was also well attended with lots of great questions. Definitely a conference I'll remember for a long time and one I hope to go back to at some point.
Finally there was one presentation that stuck in my mind. It was by Professor Philip Koopman and worth reading. There's a video of a similar presentation he did previously and despite the fact it's not great quality, I recommend watching it if you're at all interested in dependable software for mission critical environments.
Tuesday, September 22, 2015
Heisenberg's back!
A long time ago (longer than I care to remember), I made the analogy between Heisenberg's Uncertainty Principle and large-scale data consistency (weak/eventual consistency). It got reported by InfoQ too. Over the weekend I came across a paper from friend/colleague Pat Helland where he made a similar analogy, so I figured I'd mention it here. What's that they say about "great minds" ;) ?
Thursday, September 10, 2015
The modern production stack
Over a year ago I wrote the first of what was supposed to be a series of articles on how research our industry had been doing years (decades) ago is relevant today and even in use today, i.e., has moved from blue-sky research into reality. I never got round to updating it, despite several valiant attempts. However, thanks to James I got to see a nice article called Anatomy of a Modern Production Stack, so now I probably don't need to, or at least not as much as I'd expected. And before anyone points out, yes, this stuff is very similar to what James, Rob and the team have been doing with Fabric8, OpenShift etc.
Sunday, June 28, 2015
Proud ...
I have two sons. They both make me proud on a daily basis. I've been away for a week at Summit and whilst there my youngest son, who's not quite 13, did something brave that made me very proud of him. I can't go into what it was except to say that he did it through the medium of social media - that annoyed me slightly but it seems to be the way of things today for a certain generation. I've told him that what he did was brave and made me proud but I figured I would also put it here so he can refer to it in the future; it's the closest I can get to social media.
Wednesday, May 20, 2015
Kids and asynchronous messaging
We've heard a lot recently about asynchronous frameworks. Typically what these really do is prevent or discourage blocking (synchronous) calls; such calls may be made, e.g., making an RPC to a remote service, but the sender thread/process either doesn't block (so doesn't actually make the call directly), or gets back some kind of token/promise that it can later present (locally or back to the receiver) in order to get the response when it needs it. Of course there's a lot more to this which I'm deliberately glossing over, but the point I'm trying to make is that most of these frameworks don't use the term "asynchronous" in the way we might understand it elsewhere. The FLP problem is a reality, yet not one with which they tend to be concerned.
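To make that concrete, here's a minimal Java sketch of the token/promise style (illustrative only, not tied to any particular framework): the call returns a future immediately and the caller only blocks if and when it chooses to collect the response.

```java
import java.util.concurrent.CompletableFuture;

public class PromiseExample {
    // Simulated remote call: the work runs on a background thread,
    // so the caller's thread never blocks while the "RPC" is in flight.
    static CompletableFuture<String> callRemoteService(String request) {
        return CompletableFuture.supplyAsync(() -> {
            // ... marshal the request, send it, wait for the reply ...
            return "response to " + request;
        });
    }

    public static void main(String[] args) {
        CompletableFuture<String> token = callRemoteService("getQuote");

        // The caller carries on with other work here ...

        // Only now do we present the token to obtain the response;
        // join() blocks, whereas thenAccept(...) would stay non-blocking.
        System.out.println(token.join());
    }
}
```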
It's strange how the mind wanders when you're doing other things because this was precisely the problem I started to think about whilst cooking dinner the other day. With a lot of people believing that "asynchronous" is the way to go (and in many cases they're not necessarily wrong) I'm more often than not unsure as to whether they really understand the full implications. And whilst cooking I was getting frustrated with my kids, both of whom I'd tried contacting by SMS and neither of whom had responded.
So my mind wandered and I made the not-so-huge leap: I'm trying to communicate with them asynchronously. I sent each of them an SMS and I had no idea if it had arrived. Of course I could have added the SMS delivery-ack request to the messages, but that can be disabled by the receiver. Therefore, until and unless they responded, I had absolutely no way to know if they'd got the message and were ignoring me, or if it had gone missing en route (SMS isn't reliable, after all). I sent more SMS messages but with no responses I was left in the same quandary as before.
Now of course I could ring them but if they don't answer then all that does is mean I've deposited a message (voice) into some semi-persistent queue that they may or may not look at later. Another option would be to find someone who knows them and talk with them, asking for messages to be forwarded. By increasing the number of people I ask I increase the chances that one of them will eventually get through and I may receive a response (gossip protocols). However, ultimately until and unless they or a proxy (friend) for them responded (or came home) I couldn't really know directly whether or not they'd received my messages and acted upon them. Asynchronous is a bitch!
To conclude the story - they came home, had received the messages and didn't believe a response had been needed. Dinner was very tasty though!
Saturday, May 16, 2015
A little more on transactions and microservices
I had some interesting discussions this week which made me want to write something more on transactions (really XA) and microservices.
Labels:
compensations,
microservices,
transactions,
XA
Sunday, May 10, 2015
Monoliths and a bit of a history lesson
Just another cross-post for those interested.
Update: one thing I forgot to make explicit, but which hopefully comes through in the above post and the earlier one on web-scale, is that to suggest application servers (Java or something else) aren't sufficient for web-scale is difficult to justify, since many high profile sites run quite well on them today.
Labels:
Arjuna,
asynchronous invocations,
CORBA,
monoliths
Web-scale: I do not think it means what you think it means!
When I blogged about transactions recently one of the comments referenced "web-scale" and how application servers aren't well suited for web-scale distributed applications. I'm going to write something specific about this later but it got me to thinking: what does that actually mean? To paraphrase Inigo Montoya "You keep using that term, I do not think it means what you think it means." Many people and groups use the term but there really isn't a common (agreed) definition. It's a cop out to claim that we don't need such a definition because everyone who is sufficiently "in the know" will "know it when they see it".
The web has been around now for a long time. Over the years it has grown considerably, and yet many sites from the late 1990's would not be considered "web-scale" today. But back then they almost certainly were. Despite what many people may suggest today, sites then and now got by well using "traditional" (or dated?) technologies such as relational databases, CORBA, probably even COBOL! Coping with millions of requests, terabytes of data etc.
It's only fair that our definition of "web-scale" should change over time, in much the same way as our definition of things like "personal transport" changed from "horse" to "car", or even "jet pack". But we don't even have a definition of "web-scale" that we can all get behind and then evolve. It's subjective and therefore fairly useless as a way of signifying whether anything is or is not suitable for the kinds of applications or services people develop for the web. Ideally we'd fix this before anyone (person, group, company) used the term again, but I doubt that'll happen.
Thursday, May 07, 2015
Slightly frustrating article ...
I came across an article the other day from last year which tried to poke holes in transactions. I kept putting off writing about it but eventually decided I had to say something. So ... here it is.
Sunday, April 26, 2015
Listening versus talking
As the song goes ... "To every thing there is a season, and a time to every purpose under the heaven". In any conversation or meeting there's a time to listen and a time to talk. Unfortunately in many environments there's an implicit peer pressure to be heard and be seen to be heard (!) even if what is then said doesn't advance the conversation. Yet often the right thing to do is listen, think and remain quiet until and unless you've got something to say which does add to the overall thread. But guess what? That can be harder to do than you might think, especially as meetings become very vocal and people verbally challenge each other to be the dominant individual.
It takes a lot of control in these situations to listen and not react, especially when there may be so many other people jostling to be heard and yet not really saying anything additive to the meeting. In fact others in such meetings may take silence as an indication of your disconnection from the conversation, or lack of understanding, which could lead you to want to say something (anything) just to prevent that kind of interpretation. But trust me ... Sometimes silence really is golden!
SOA done right?
I said something the other day in a presentation I was giving that I immediately regretted: I was asked how to position microservices and SOA for marketing, to which I responded "Microservices are SOA done right". A nice sound bite, but the reason I regret it is that someone could infer from it that anything termed a microservice is a good SOA citizen. That's not the case. Just as not everything termed SOA or REST was following good SOA or REST principles, so too will be the case with microservices. And the lack of a formal definition of microservices, whether in terms of recognisable SOA principles or something else, doesn't help matters.
So what should I have said? In hindsight, and remembering that it had to be short and to the point, I'd probably revisit and say something along the lines of "If the service follows good SOA principles then it's probably on the way to being the basis of a microservice". Not quite as snappy a response, but probably as far as I'd feel comfortable going in a single sentence. Of course it then could lead to follow up questions such as "What's the difference between a good SOA citizen/service and a microservice?" but then we're into a technical domain and no longer marketing.
Saturday, April 25, 2015
Microservices and events
I wanted to write more on microservices (SOA) and specifically around reactive, event-orientation. We've heard a lot recently about frameworks such as Vert.x or Node.js which are reactive frameworks for Java, JavaScript and other languages (in the case of Vert.x). Developers are using them for a range of applications but because they present a reactive, event driven approach to building services and applications, it turns out they're also useful for microservices. Therefore, I wanted to give an indication of why they are useful for microservices and when I started to put proverbial pen to paper I found I was repeating a lot of what I'd written about before and specifically when I discussed some of the core reasons behind the future adaptive middleware platform.
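For anyone who hasn't seen these frameworks, the following is a minimal sketch of the event-driven style in Java, written against the Vert.x 3.x core API (a Node.js version would look much the same in JavaScript): you register handlers and the framework's event loop invokes them as requests arrive, rather than parking a thread per request.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class HelloVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // Register a request handler; the event loop calls it for each request,
        // so no thread sits blocked waiting on I/O.
        vertx.createHttpServer()
             .requestHandler(req -> req.response().end("hello, world"))
             .listen(8080);
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new HelloVerticle());
    }
}
```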
Now I'm not suggesting that Vert.x, Node.js or any reactive, event-oriented framework or platform fulfills everything I mentioned in that earlier entry because I don't think they do, at least not today. However, they do represent a good basis from which to build. Of course that doesn't preclude existing architectures and associated software stacks from being used to build microservices. In fact I think a combination of these new approaches and existing mature stacks are the right way to go; with each new generation we learn the mistakes of the past and their successes, building upon the latter and hopefully not reproducing the former.
Unfortunately I'm not going to endorse the reactive manifesto, because I'm not entirely convinced that what's encompassed there really represents much more than good fault tolerance and message-driven architecture. For instance, responsive? I'd like to think that any useful platform should be responsive! I think we should add that to the "bleeding obvious" category. My definition of reactive would include fault tolerant, scalable, reliable, message-oriented (supporting synchronous and asynchronous), self-healing (resiliency is only part of that) and event-driven.
Therefore, when I say that reactive, event-oriented is a good fit for microservices (remember ... SOA) I'm talking about something slightly different than reactive as defined within the manifesto. It's all that and more. We've come a long way since the 1970's and 80's when things were typically synchronous and RPCs were the main paradigm. Even those are event driven if you think about it. But since then synchronous interactions fell out of favour, asynchronous rose in interest and so did event-orientation and associated efforts like CEP. Hopefully given what I wrote in that earlier article it should be fairly obvious what kind of platform characteristics I believe are necessary in the future - microservices is just a good approach that can take advantage of it.
As I said earlier though, I don't believe any implementation today really embodies where we need to be. There's a lot we can leverage from the past decades of enterprise software development but there is more work to be done. For instance, we hear a lot about asynchronous systems where in reality what most people are talking about is synchronous interactions performed by a separate thread to the main application (business logic) thread of control; there are known time bounds associated with work or interactions, even in a distributed environment. If we really want to talk about true asynchronous interactions, and I believe we do, then we are looking at unbounded interactions, where you simply do not know when a message is overdue. That causes a number of problems, including failure detection/suspicion (timeouts are a favourite way of trying to determine when a machine may have failed and now you can't use time as a factor), and distributed consensus, which has been proven to be unsolvable in an asynchronous environment.
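To make the failure-suspicion point concrete, here's a small, purely illustrative Java sketch: a timeout turns an unbounded wait into a bounded one, but its expiry only tells you the reply is late, not whether the remote party has failed, is merely slow, or whether the message was lost.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FailureSuspicion {
    public static void main(String[] args) throws Exception {
        // A reply that may never arrive (simulated by a future we never complete).
        CompletableFuture<String> reply = new CompletableFuture<>();

        try {
            // Bound the wait ourselves: two seconds, then stop waiting.
            String response = reply.get(2, TimeUnit.SECONDS);
            System.out.println("got: " + response);
        } catch (TimeoutException suspected) {
            // We can only *suspect* failure: the peer may be dead, slow or
            // partitioned, or the message/response may still be in flight.
            System.out.println("no reply yet - failure suspected, not confirmed");
        }
    }
}
```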
Of course these may seem like purely theoretical or academic problems. But think about the kinds of environments we have today and which are growing in applicability: cloud, mobile, internet of things with sensors, gateways etc. These are inherently asynchronous, with unpredictable interaction patterns and message exchanges. Whilst synchronous or multi-threaded asynchronous may be a good stop-gap approach, it will inevitably present scalability and resiliency problems. True asynchronous, with all of the problems it represents, is the ultimate goal. But we're not there yet and it's going to take time as well as recognition by developers that further research and development is a necessity.
Labels:
asynchronous,
event driven,
FLP,
microservices,
reactive
Tuesday, April 21, 2015
Theory of Everything
These days I watch more movies on a plane than anywhere else. One of those I saw yesterday on a trip to Boston was The Theory of Everything. Stephen Hawking has been a hero of mine from before I started at university doing my physics degree. I read a number of his papers and about him before I read his seminal book A Brief History Of Time. Several times.
Despite moving more into computing over the years I've continued to track physics, with books and papers by the likes of Penrose, Hawking and others, but obviously time gets in the way, which if you've read some of the same could be said to be quite ironic! Anyway, when I heard they were doing a film of Hawking's life I was a little nervous and decided it probably wasn't going to be something I'd watch. However, stuck on a 7 hour flight with not much else to do (laptop couldn't function in cattle-class due to person in front reclining) I decided to give it a go.
It's brilliant! Worth watching several times. The acting, the dialogue, the story, all come together. There's a sufficient mix of the human and the physics to make it appeal to a wide audience. A singular film! (Sorry, a really bad pun!)
I also managed to watch The Imitation Game, which I think is probably just as good but for other reasons.
Saturday, April 11, 2015
Transactions and microservices
Sometimes I think I have too many blogs to cover with articles. However, this time what I was writing about really did belong more in the JBoss Transactions team blog, so if you're interested in my thoughts on where transactions (ACID, compensation etc.) fit into the world of microservices, then check it out.
Labels:
docker,
microservices,
REST,
SOA,
transactions
Thursday, April 09, 2015
Containerless Microservices?
A few weeks ago I wrote about containerless development within Java. I said at the time that I don't really believe there's ever the absence of a container in one form or another, e.g., your operating system is a container. Then I wrote about containers and microservices. Different containers from those we're used to in the Java world, but definitely something more and more Java developers are becoming interested in. However, what I wanted to make clear here is that in no way are microservices (aka SOA) tied to (Linux) containers, whether Docker-based or some other variant.
Yes, those containers can make your life easier, particularly if you're in that DevOps mode and working in the Cloud. But just as not all software needs to run in the Cloud, neither do all services (whether micro or macro) need to be deployed into a (Linux) container. What I'm going to say will be in terms of Java, but it's really language agnostic: if you're looking to develop a service within Java then you already have great tools and 3rd party components to help you get there. The JVM is a wonderful container for all of your services, with many years of skilled development behind it to make it reliable and performant. It also runs on pretty much every operating system you can mention today and on hardware ranging from constrained devices such as the Raspberry Pi up to and including mainframes.
In the Java world we have a rich heritage of integration tools, such as Apache Camel and HornetQ. The community has built some of the best SOA tools and frameworks anywhere, so of course Java should be one of the top languages in which to develop microservices. If you're thinking about developing microservices in Java then you don't have to worry about using Linux containers: your unit of failure is the JVM. Start there and build upward. Don't throw in things (software components) that aren't strictly necessary. Remember what Einstein said: "Everything should be made as simple as possible, but not simpler."
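As a trivial illustration of the JVM being the container, here's a sketch of a self-contained service using nothing but the JDK's built-in HTTP server; no application server and no Linux container, just the java binary on whatever machine you have (the class and endpoint names are made up, of course).

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class GreetingService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/greeting", exchange -> {
            byte[] body = "hello from a plain JVM".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();  // the JVM process itself is the unit of deployment and failure
    }
}
```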
However, as I said earlier, the Linux container can help you once you've got your microservice(s) up and running. Developing locally on your laptop, say, is unlikely to be the place where you want to start with something like Docker, especially if you're just learning how to create microservices in the first place. (Keep the number of variables to a minimum). But once you've got the service tested and working, Docker and its like represent a great way of packaging them up and deploying them elsewhere.
Monday, April 06, 2015
Microservices and state
In a previous entry I was talking about the natural unit of failure for a microservice(s) being the container (Docker or some other implementation). I touched briefly on state but only to say that we should assume stateless instances for the purpose of that article and we'd come back to state later. Well it's later and I've a few thoughts on the topic. First it's worth noting that if the container is the unit of failure, such that all servers within an image fail together, then it can also be the unit of replication. Again let's ignore durable state for now but it still makes sense to spin up multiple instances of the same image to handle load and provide fault tolerance (increased availability). In fact this is how cluster technologies such as Kubernetes work.
I've spent a lot of time over the years working in the areas of fault tolerance, replication and transactions; it doesn't matter whether we're talking about replicating objects, services, or containers, the principles are the same. This got me to thinking that something I wrote over 22 years ago might have some applicability today. Back then we were looking at distributed objects and strongly consistent replication using transactions. Work on weakly consistent replication protocols was in its infancy and despite the fact that today everyone seems to be offering them in one form or another and trying to use them within their applications, their general applicability is not as wide as you might believe; and pushing the problem of resolving replica inconsistencies up to the application developer isn't the best thing to do! However, once again this is getting off topic a bit and perhaps something else I'll come back to in a different article. For now let's assume if there are replicas then their states will be in sync (that doesn't require transactions, but they're a good approach).
In order to support a wide range of replication strategies, ranging from primary-copy (passive) through to available copies (active), we created a system whereby the object methods (code) were replicated separately from the object state. In this way the binaries representing the code were immutable and when activated they'd read the state from elsewhere; in fact, because the methods could be replicated to different degrees from the state, it was possible for multiple replicated methods (servers) to read their state from an unreplicated object state (store). I won't spoil the plot too much, so the interested reader can take a look at the original material. However, I will add that there was a mechanism for naming and locating these various server and state replicas; we also investigated how you could place the replicas (dynamically) on various machines to obtain a desired level of availability, since availability isn't necessarily proportional to the number of replicas you've got.
If you're familiar with Kubernetes (other clustering implementations are possible) then hopefully this sounds familiar. There's a strong equivalency between the components and approaches that Kubernetes uses and what we had in 1993; of course other groups and systems are also similar and for good reasons - there are some fundamental requirements that must be met. Let's get back to how this all fits in with microservices, if that wasn't already obvious. As before, I'll talk about Kubernetes but if you're looking at some other implementation it should be possible to do a mental substitution.
Kubernetes assumes that the images it can spin up as replicas are immutable and identical, so it can pull an instance from any repository and place it on any node (machine) without having to worry about inconsistencies between replicas. Docker doesn't prevent you making changes to the state within a specific image, but doing so results in a different image instance. Therefore, if your microservice(s) within an image maintain their state locally (within the image), you would have to ensure that this new image instance was replicated to the repositories that something like Kubernetes has access to when it creates the clusters of your microservices. That's not an impossible task, of course, but it does present some challenges. First, how do you distribute the updated image amongst the repositories in a timely manner? You wouldn't want a strongly consistent cluster to be created with different versions of the image, because different versions mean different states and hence no consistency. Second, how do you ensure that the state changes which happen at each Docker instance, and which result in a new image being created, stay in lock-step? One of the downsides of active replication is that it assumes determinism for the replica, i.e., given the same start state and the same set of messages in the same order, the same end state will result; that's not always possible if you have non-deterministic elements in your code, such as the time of day. There are a number of ways in which you can ensure consistency of state, but now we're not just talking about the state of your service: it's also got to include the entire Docker image.
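As a tiny, contrived Java illustration of that determinism requirement: the first replica type below converges given the same ordered inputs, whereas the second does not, because it consults the local clock.

```java
// Active replication assumes determinism: same start state plus the same
// messages in the same order must yield the same end state at every replica.
class CounterReplica {
    private long total;
    void apply(long delta) { total += delta; }      // deterministic
    long state() { return total; }
}

// Non-deterministic replica: two copies applying exactly the same message
// can diverge, because the local clock differs at each replica.
class TimestampedCounterReplica {
    private long total;
    private long lastApplied;
    void apply(long delta) {
        total += delta;
        lastApplied = System.currentTimeMillis();   // differs per replica
    }
}
```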
Therefore, overall it can be a lot simpler to separate the binary that implements the algorithms for your microservices (aka the Docker image, or the 1993 object) from the state, and to consider the images within which the microservices reside to be immutable; any state changes that do occur must be saved (made durable) "off image" or be lost when the image is passivated, which could be fine for your services, of course, if there's no need for state durability. If you're using active replication then you still have to worry about determinism, but now we're only considering the state and not the entire Docker image binary too. The ways in which the states are kept consistent are covered by a range of protocols, which are well documented in the literature. Where the state is actually saved (the state store, object store, or whatever you want to call it) will depend upon your requirements for the microservice. There are the usual suspects, such as an RDBMS, a file system, a NoSQL store, or even highly available (replicated) in-memory data stores which have no persistent backup and rely on it being only a slim chance that a catastrophic failure will wipe out all of the replicas (even persistent stores have a finite probability of failing and losing your data). And of course the RDBMS, file system etc. should themselves be replicated, or you'll be putting all of your eggs in one basket!
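The "state lives off image" idea can be sketched roughly as follows; the StateStore interface and the service are purely hypothetical, the point being only that the image stays immutable while durable state goes to an external (and ideally itself replicated) store.

```java
// Hypothetical illustration: the service binary (baked into an immutable image)
// never keeps durable state locally; it loads and saves it via an external store.
interface StateStore {
    String load(String key);
    void save(String key, String value);
}

class ShoppingCartService {
    private final StateStore store;   // e.g. an RDBMS, NoSQL or replicated in-memory store

    ShoppingCartService(StateStore store) {
        this.store = store;
    }

    void addItem(String cartId, String item) {
        String cart = store.load(cartId);                  // state read "off image"
        String updated = (cart == null) ? item : cart + "," + item;
        store.save(cartId, updated);                       // state written "off image"
        // Nothing durable remains inside this instance, so any replica of the
        // same image can serve the next request.
    }
}
```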
One final note (for now): so far we've been making the implicit assumption that each Docker image that contains your microservices in a cluster is identical and immutable. What if we relaxed the identical aspect slightly and allowed different implementations of the same service, written by different teams and potentially in different languages? For simplicity we should assume that these implementations can all read and write the same state (though even that limitation could be relaxed with sufficient thought). Each microservice in an image could be performing the same task, written against the same algorithm, but with the hope that bugs or inaccuracies produced by one team were not replicated by the others, i.e., this is n-version programming. Because these Docker images contain microservices that can deal with each other's states, all we have to do is ensure that Kubernetes (for instance) spins up sufficient versions of these heterogeneous images to give us a desired level of availability in the event of coding errors. That shouldn't be too hard to do, since it's something research groups were doing back in the 1990's.
Saturday, April 04, 2015
Microservices and the unit of failure
I've seen and heard people fixating on the "micro" bit of "microservices". Some people believe that a microservice should be no larger than "a few lines of code" or a "few megabytes", and there has been at least one discussion about nanoservices! I don't think we should fixate on the size but rather on that old Unix adage from Doug McIlroy: "write programs that do one thing and do it well". Replace "programs" with "service". It doesn't matter if that takes 100 lines of code or 1000 (or more or less).
As I've said several times, I think the principles behind microservices aren't that far removed from "traditional" SOA, but what is driving the former is a significant change in the way we develop and deploy applications, aka DevOps, or even NoOps if you're tracking Netflix and others. Hand in hand with these changes come new processes, tools, frameworks and other software components, many of which are rapidly becoming part of the microservices toolkit. In some ways it's good to see SOA evolve in this way and we need to make sure we don't forget all of the good practices that we've learnt over the years - but that's a different issue.
Anyway, chief amongst those tools is the rapid evolution of container technologies, such as Docker (other implementations are available, of course!). For simplicity I'll talk about Docker in the rest of this article, but if you're using something else then you should be able to do a global substitution and have the same result. Docker is great at creating stable deployment instances for pretty much anything (as long as it runs on Linux, at the moment). For instance, you can distribute your product or project as a Docker image and the user can be sure it'll work as you intended, because you went to the effort to ensure that any third party dependencies were taken care of at the point you built it; so even if that version of Foobar no longer exists in the world, if you had it and needed it when you built your image then that image will run just fine.
So it should be fairly obvious why container images, such as those based on Docker, make good deployment mechanisms for (micro) services. In just the same way as technologies such as OSGi did (and still do), you can package up your service and be sure it will run first time. But if you've ever looked at a Docker image you'll know that they're not exactly small; depending upon what's in them, they can range from hundreds of megabytes to gigabytes in size. Now of course if you're creating microservices and are focussing on the size of the service, then you could be worried about this. However, as I mentioned before, I don't think size is the right metric on which to base the conclusion of whether a service fits into the "microservice" category. Furthermore, you've got to realise that there's a lot more in that image than the service you created, which could in fact be only a few hundred lines of code: you've got the entire operating system, for a start!
Finally there's one very important reason why I think that despite the size of Docker images being rather large, you should still consider them for your (micro) service deployments: they make a great unit of failure. We rarely build and deploy a single service when creating applications. Typically an application will be built from a range of services, some built by different teams. These services will have differing levels of availability and reliability. They'll also have different levels of dependency between one another. Crucially there will be groupings of services which should fail together, or at least if one of them fails the others may as well fail because they can't be useful to the application (clients or other services) until the failed service has recovered.
In previous decades, and even today, we've looked at middleware systems that would automatically deploy related services on to the same machine and, where possible, into the same process instance, such that the failure of the process or machine would fail them as a unit. Furthermore, if you didn't know or understand these interdependencies a priori, some implementations could dynamically track them and migrate services closer to each other, maybe even on to the same machine eventually. Now this kind of dynamism is still useful in some environments, but with containers such as Docker you can now create those units of failure from the start. If you are building multiple microservices, or using them from other groups and organisations, within your applications or composite service(s), then do some thinking about how they are related; if they should fail as a unit then pull them together into a single image.
Note I haven't said anything about state here. Where is state stored? How does it remain consistent across failures? I'm assuming statelessness at the moment, so technologies such as Kubernetes can manage the failure and recovery of immutable (Docker) images. Once you inject state then some things may change, but let's cover that at another date and time.
Labels: docker, failure, fault tolerance, microservices, recovery
Saturday, March 21, 2015
Six Years as JBoss CTO
It's coming up to 6 years since Sacha asked me to take over from him as CTO. It was a privilege and it remains so today. Although Arjuna was involved with JBoss for many years beforehand and my good friend/colleague Bob Bickel was a key member of JBoss, I didn't join JBoss officially until 2005. I found it to be a great company and a lot more than I ever expected! A decade on and now part of Red Hat, it remains an exciting place to work. I wake up each morning wondering what's in store for me as CTO and I still love it! Thanks Marc, Sacha and Bob!
Sunday, March 15, 2015
Retrospecting: Sun slowly setting
I remember writing this entry a few years back (2010), but it appears I forgot to hit submit. Therefore, in the interests of completeness I'm going to post it even though it's 5 years old!
--
So the deal is done and Sun Microsystems is no more and I'm filled with mixed emotions. Let's ignore my role within Red Hat/JBoss for now, as there are certainly some emotions tied up in that. It's sad to see Sun go and yet I'm pleased they haven't gone bust. I have a long history with Sun as a user and, through Arjuna, as a prospective partner. When I started my PhD back in the mid 1980's my first computer was a Whitechapel, the UK equivalent to the Sun 360 at the time. But within the Arjuna project the Sun workstation was king and to have one was certainly a status symbol. I remember each year when the new Sun catalogue came out or we had a new grant, we'd all look around for the next shiny new machine from Sun. We had 380s, Sparc, UltraSparc and others all the way through the 1990's (moving from SunOS through to the Solaris years). The Sun workstation and associated software was what you aspired to get, either directly or when someone left the project! In fact the Arjuna project was renowned within the Computing Department for always having the latest and greatest Sun kit.
The lustre started to dim in the 1990's when Linus put out the first Linux distribution and we got Pentium 133s for a research grant into distributed/parallel computing (what today people might call Grid or Cloud). When a P133 running Linux was faster than the latest Sun we knew the writing was on the wall. By the end of the decade most Sun equipment we had was at least 5 years old and there were no signs of it being replaced by more Sun machines.
Throughout those years we were also in touch with Sun around a variety of topics. For instance we talked with Jim Waldo about distributed systems and transactions, trying to persuade them not to develop their own transaction implementation for Jini but to use ours. We also got the very first Spring operating system drop along with associated papers. We had invitations to speak at various Sun sites as well as the usual job offers. Once again, in those days Sun was the cool place to be seen and work.
But that all started to change as the hardware dominance waned. It was soon after Java came along, though I think that is coincidental. Although Solaris was still the best Unix variant around, Linux and NetBSD were good enough, at least in academia. Plus they were a heck of a lot cheaper and offered easier routes to do some interesting research and development, e.g., we started to look at reworking the Newcastle Connection in Linux, which would have been extremely difficult to do within Solaris.
Looking back I can safely say that I owe a lot to Sun. They were the best hardware and OS vendor in the late 1980's and early 1990's, providing me and others in our project with a great base on which to develop Arjuna and do our PhDs. In those days before Java came along they were already the de facto standard for academic research and development, at least with the international communities with which we worked. I came to X11, Interviews, network programming, C++ etc. all through Sun, and eventually Java of course. So as the Sun sinks slowly in the west I have to say a thanks to Sun for 20 years of pleasant memories.
--
Chromebook?
I travel a lot and until a couple of years ago I lugged around a 17" laptop. It was heavy, but it was also the only machine I used, and since I don't use an external monitor at home (just in the office) I needed the desktop space. It was, however, almost unusable on a plane, especially if the person in front decided to recline their seat! Try coding when you can't see the entire screen!
In the past I'd tried to have a second smaller laptop for travelling, but that didn't work for me. Essentially the problem was one of synchronisation: making sure my files (docs, source, email etc.) which mainly resided on my 17" (main) machine were copied across to the smaller (typically 13") machine and back again once I was home. Obviously it's possible to do - I did it for a couple of years. But it's a PITA, even when automated. So eventually I gave up and went back to having just one machine.
Enter the tablet age. I decided to try to use a tablet for doing things when on a plane, but still just have a single machine and travel with it. That worked, but it was still limiting, not least because I don't like typing quickly on a touch screen - too many mistakes. Then I considered getting a keyboard for the tablet and suddenly thought about a Chromebook, which was fortunate because I happened to have one that was languishing unused on a shelf.
Originally I wasn't so convinced about the Chromebook. I'd lived through the JavaStation era and we had one in the office - one of the very first ever in the UK. Despite our being heavy users of Java, the JavaStation idea never appealed to me, and it didn't take off despite a lot of marketing from Sun. There were a number of problems, not least of which were the lack of applications and the fact that the networks at the time really weren't up to the job. However, today things are very different - I use Google docs a lot from my main development machine as well as from my phone.
Therefore, although the idea of using a Chromebook hadn't appealed initially, it started to grow on me. What also helped push me over the tipping point was that I grew more and more uninterested in my tablet(s). I don't use them for playing games or social media; typically they are (were) used for email or editing documents, both of which I can do far better (for me at least) on a Chromebook.
So I decided that I could probably get away with a Chromebook on a plane, especially since it has offline capabilities - not many planes have wifi yet. But then I began to think that perhaps for short trips I could get away with it for everything, i.e., leave the main machine at home. Despite the fact I don't get a lot of time to spend coding, I still do it as much as I can. But for short trips (two or three days) I tend to only be working on documents or reading email. If I didn't take my main machine I wouldn't be able to code, but it wouldn't be such a problem. I also wouldn't need to sync code or email between the Chromebook and my main machine.
Therefore, I decided to give it a go earlier this year. I'd used the Chromebook at home quite extensively but never away from there. This meant there were a few teething problems, such as ensuring the work VPN worked smoothly and installing a few more applications that allowed for offline editing or video watching instead of real-time streaming. It wasn't a perfect first effort, but I think it was successful enough for me to give it another go next time I travel and know I won't need to code. I've also pretty much replaced my tablets with the Chromebook.
Saturday, March 07, 2015
More thoughts on container-less development
It's time to get back to the concept of container-less development. I gave an outline of my thinking a while back, but let's remember that I'm not talking about containers such as docker or rocket, though they do have an impact on the kinds of containers I am concerned about: those typically associated with application servers and specifically Java application servers. Over the years the Java community has come to associate containers with monolithic J2EE or Java EE application servers, providing some useful capabilities but often in a way which isn't natural for the majority of Java developers who just "want to get stuff done."
Now of course those kinds of containers did exist. Probably the first such container was what came out of implementing the J2EE standard and that itself evolved from CORBA, which despite some bad press, wasn't as bad as some people make out (though that's perhaps a topic for a different entry.) CORBA was based around the concept of services, but back then the natural unit of concurrency was the operating system process because threading wasn't a typical aspect of programming languages. Early threading implementations, such as those using setjmp/longjmp or Sun's LWP package for SunOS/Solaris, were very much in their infancy. When Java came along with its native support for threads, CORBA was still the popular approach for enterprise middleware, so it was fairly natural to try to take that architecture and transplant it into Java. What resulted was the initial concept of a container of interacting services minus the distribution aspect to improve performance. (It's worth noting that the success of Java/J2EE, the inclusion of Java as a supported language for CORBA, and the increase in thread support for other languages resulted in a reverse imitation with the CORBA Component Model Architecture.)
Now of course there's a lot more to a container these days than the services (capabilities) it offers. But in the good ol' days this approach to kickstarting the J2EE revolution around CORBA resulted in some inefficient implementations. However, as they say, hindsight is always 20/20. What exacerbated things though is that despite gaining more and more experience over the years, most J2EE application servers didn't really go back to basics and tackle the problem with a clean slate. Another problem, which frameworks such as Spring tried to address, was that CORBA didn't really have a programming model, but again that's for another entry.
Unfortunately this history of early implementations hasn't necessarily always had a positive impact on current implementations. Change can be painful and slow. And many people have used these initial poor experiences with Java containers as a reason to stay away from containers entirely. That is unfortunate because there is a lot of good work that containers mask and which we take for granted (knowingly or unknowingly.) It includes things such as connection pooling, classloader management, thread management, security etc. Of course as developers we were able to manage many of these things ourselves before containers came on the scene, and we still can today. CORBA programmers did the exact same thing (anyone remember dependency hell with shared libraries in C/C++?) But for complex applications, those (in a single address space) that grow in functionality and are typically built by a team or from components built by different programmers, handling all of this yourself can become almost a full time job in itself. These aspects of the container are useful for developers.
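For a flavour of the plumbing being masked, here's a minimal sketch assuming a Java EE style container; the JNDI name and the table are illustrative only. The container injects a pooled DataSource, so the application code never hand-rolls driver loading, pool sizing or connection validation.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.annotation.Resource;
import javax.sql.DataSource;

public class CustomerDao {

    // The container injects a pooled, managed DataSource; the application never
    // deals with drivers, pool sizing or connection validation itself.
    // The JNDI name here is illustrative only.
    @Resource(lookup = "java:jboss/datasources/ExampleDS")
    private DataSource ds;

    public boolean customerExists(String id) throws SQLException {
        // Connections are borrowed from, and returned to, the container's pool.
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement("SELECT 1 FROM customers WHERE id = ?")) {
            ps.setString(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}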
It's important to understand that some containers have changed for the better over the years. They've become more streamlined, more fit for purpose, and they look at the problem domain from a whole new perspective. The results are lightweight containers that do really well the few core things that all (or at least the majority of) developers will always need, with anything else an add-on that is made available to the developer or application on an as-needed basis. The idea is that any of these additional capabilities are selected (dynamically or statically) with an understanding of the trade-offs they may represent, e.g., overhead versus functionality, so the choice to enable them is an informed one rather than imposed by the container developers. Very much like the micro-kernels that we saw developing from the old monolithic operating systems back in the 1980's. So even if you're not a Java EE fan or don't believe you need all of the services that often come out of the box, such as transactions, messaging and security, the container's probably doing some goodness for you that you'd rather not have to handle manually.
Apart from the complexity and overhead that containers may provide (or are assumed to provide), there's the ability to dynamically update the running instance. For instance, adding a new service that wasn't available (or needed) at the time the container booted up. Or migrating a business object from one container to another which may require some dependent services to be migrated to the destination container. Or patching. Or simply adding a more up-to-date version of a service whilst retaining the old service for existing clients. Not all containers support this dynamism, but many do and it's a complexity that does not come without cost, even for the most efficient implementations.
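As a toy illustration of that dynamism - this is not any particular container's API, just a sketch - imagine a registry that holds services keyed by name and version, so a newer version can be deployed alongside the old one that existing clients still resolve:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class DynamicRegistry {

    private final Map<String, Supplier<Object>> services = new ConcurrentHashMap<>();

    // Hot-deploy a (possibly newer) version of a service without disturbing the old one.
    public void deploy(String name, String version, Supplier<Object> factory) {
        services.put(name + ":" + version, factory);
    }

    // Retire a version once no clients depend on it any more.
    public void undeploy(String name, String version) {
        services.remove(name + ":" + version);
    }

    // Clients resolve the version they were written against.
    @SuppressWarnings("unchecked")
    public <T> T lookup(String name, String version) {
        Supplier<Object> factory = services.get(name + ":" + version);
        if (factory == null) {
            throw new IllegalStateException(name + ":" + version + " is not deployed");
        }
        return (T) factory.get();
    }
}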
Whether or not you agree with it, it should be apparent by now why there's a growing movement away from containers. Some people have experienced the overhead some containers impose for very little value. Some people haven't, but trust the opinions of their colleagues and friends. Still others have never been keen on containers in the first place. Whatever the reasons, the movement does exist. And then along comes the new generation of (different) containers, such as docker and rocket, which some in the container-less movement believe obviate the need for old-style containers entirely. The argument goes something like this (I'll use docker as an example simply because if I use the term container here it will become even more confusing!): docker produces immutable images and is very easy to use, so rather than worry about creating dynamically updateable containers within it, the old style "flat classpath"/container-less development strategies make more sense. In other words, work with the technology to get the best out of what it provides, rather than try to do more that really doesn't make sense or give you any real benefit.
This is a good argument and one that is not wrong. Docker images are certainly immutable and fairly easy to use. But that doesn't mean they obviate the need for Java containers. You've got to write your application somehow. That application may be complex, built by a team or built from components created by developers from different organisations over a period of years. And the immutability aspect of docker images is only true between instantiations of the image, i.e., the state of a running image can be changed, it's just that once it shuts down all changes are lost and any new instance starts from scratch with the original state. But docker instances may run for a long time. If they're part of a high-availability group, with each image a replica of the others, then the replica group could be running indefinitely, and changes made to the state of one are applied to the others (new replicas would have their state brought up to date as they join the group). Therefore, whilst immutability is a limitation, it's no different from only having an in-memory database with no persistent backing store: it can be architected around and could even be a performance benefit.
If you believe that the mutability of a running docker instance is something that makes sense for your application or service, then long running instances are immediately part of your design philosophy. As a result, the dynamic update aspect of containers that we touched on earlier immediately becomes a useful thing to have. You may want to run multiple different instances of the "same" service. You may need to patch running instance(s) whilst a pre-patched image is deployed into the application or replica group (eventually the pre-patched docker instances will replace the in-memory patched versions by natural attrition).
And then we have microservices. I've said enough about them so won't go into specific details. However, with microservices we're seeing developers starting to consider SOA-like deployments for core capabilities (e.g., messaging) or business logic, outside the address space of other capabilities which would normally be co-located within the Java container. This is very much like the original CORBA architecture and it has its merits - it is definitely a deployment architecture that continues to make sense decades after it was first put into production. But microservices don't remove the need for containers, even if they're using docker containers, which is becoming a popular implementation choice. As I said in my original article on this topic, in some ways the operating system becomes your container in this deployment approach. But within these docker instances, for example, containers are still useful.
Now of course I'm not suggesting that Java containers are the answer to all use cases. There are many examples of successful applications that don't use these containers and probably wouldn't have benefited much from them. Maybe some microservices implementations won't need them. But I do believe that others will. And of course the definition of the container depends upon where you look - just because it's not as obvious as the traditional Java container doesn't mean there's not a container somewhere, e.g., your operating system. However they're implemented, containers are useful and going container-less is really not an option.
Labels: container-less, docker, J2EE, Java EE, microservices, middleware
Saturday, February 28, 2015
The Undiscovered Country
I grew up in the 60's and 70's with Star Trek. In those days you didn't have hundreds of TV channels or many options of things to watch and Star Trek stood out for good reasons. I continue to be a fan today, but the original series is my favourite. So it was with great sadness that I heard of the death of Leonard Nimoy. I watched him in Mission Impossible back then too, but it was as Spock that I'll remember him the most. I thought about how I'd want to put my thoughts down but realised that someone else said it a lot better than I could many years ago.
"We are assembled here today to pay final respects to our honored dead. And yet it should be noted that in the midst of our sorrow, this death takes place in the shadow of new life, the sunrise of a new world; a world that our beloved comrade gave his life to protect and nourish. He did not feel this sacrifice a vain or empty one, and we will not debate his profound wisdom at these proceedings. Of my friend, I can only say this: of all the souls I have encountered in my travels, his was the most... human."
"We are assembled here today to pay final respects to our honored dead. And yet it should be noted that in the midst of our sorrow, this death takes place in the shadow of new life, the sunrise of a new world; a world that our beloved comrade gave his life to protect and nourish. He did not feel this sacrifice a vain or empty one, and we will not debate his profound wisdom at these proceedings. Of my friend, I can only say this: of all the souls I have encountered in my travels, his was the most... human."
Saturday, February 21, 2015
Microservices
I've been meaning to write up some thoughts I've had on microservices for quite a while but never really had the time (or made the time). However, when I was asked if I'd like to do an interview on the subject for InfoQ I figured it would be a forcing function. I think the interview was great and hopefully it helped to make it clear where I stand on the topic: whatever it's called, whether it's services-based, SOA, or microservices, the core principles have remained the same throughout the last 4 decades. What worries me about microservices, though, is the headlong rush by some to adopt something without understanding fully those principles. Yes there are some really interesting technologies around today that make it easier to develop good SOA implementations, such as Docker, Vert.x and Fabric8, but building and deploying individual services is the least of your worries when you embark on the service/SOA road.
Of course identifying the services you need is an important piece of any service-oriented architecture. How "big" they should be is implicitly part of that problem, though most successful implementations I've come across over the years rarely consider size in terms of lines of code (LOC) but rather business function(s). The emphasis on size in terms of LOC is one of the things I don't really like about microservices - unlike in the scientific world, micro here is not well defined and one person's "micro" could easily be another's "macro".
This brings us to the concept of service composition. Not all applications will be built from atomic (indivisible) services. Some services may be composed of other services. Of course there are different ways of approaching that from an implementation perspective. For instance, a composite service could just be an intelligent endpoint (service) which acts as a coordinator for remote interactions on the other (constituent) services, such that a client never gets to interact with them directly. Another implementation could be to co-locate the constituent services in the same address space as the coordinator. Of course a good service-based implementation would not allow these implementation choices to be exposed by the composite service API to the client, so that implementations could be changed without requiring changes to users.
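Here's a minimal sketch of that idea; the service and class names are invented for illustration. The composite's API is the only thing the client couples to, and whether its constituents are local objects or remote proxies is an implementation detail that can change without affecting users:

public class CompositeQuoteService {

    // Constituent services: each may be a local object in the same address
    // space or a proxy that makes a remote invocation; the client of
    // CompositeQuoteService cannot tell and does not need to.
    interface PricingService  { double basePrice(String product); }
    interface DiscountService { double discountFor(String customer); }

    private final PricingService pricing;
    private final DiscountService discounts;

    public CompositeQuoteService(PricingService pricing, DiscountService discounts) {
        this.pricing = pricing;
        this.discounts = discounts;
    }

    // The composite's public operation: the only thing the client couples to.
    public double quote(String product, String customer) {
        double price = pricing.basePrice(product);
        return price * (1.0 - discounts.discountFor(customer));
    }

    public static void main(String[] args) {
        // Co-located constituents wired in directly; swapping in remote proxies
        // would not change the composite's API or its clients.
        CompositeQuoteService quotes = new CompositeQuoteService(
                p -> 100.0,   // stand-in pricing implementation
                c -> 0.10);   // stand-in discount implementation
        System.out.println(quotes.quote("widget", "acme"));
    }
}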
Some other factors that must be taken into account when considering the "size" of a service or how the constituent services relate to a composite service include:
(i) performance - remote invocations, whether over a binary protocol such as AMQP or, especially, a text-based protocol such as HTTP, are significantly slower than intra-process communication. Therefore, if performance is important and there will be many cross-service interactions, it may make sense to either consider merging services or deploying them within the same address space so that intra-process communication can be used instead (see the sketch after this list). A service does not have to reside in its own distinct operating system process.
(ii) fault tolerance - despite the fact that you may be able to replicate a specific service to try to obtain high availability, there's always going to be a finite probability that your service will become unavailable - catastrophic failures do happen (entropy always increases!) And remember that it may not be possible, or at least easy, to replicate some services, e.g., active replication of a service requires the business logic to be deterministic; otherwise you need to use a passive replication protocol, which may adversely impact performance to the point of making it unrealistic to replicate in the first place. Therefore, if the failure of a service causes other services to be unusable (no Plan B), it may make more sense to co-locate these services into a unit of failure, such that if one is going to fail (crash) they all fail anyway.
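For point (i), here's a minimal sketch in which the interface, the endpoint URL and the numbers are all invented: the same service contract is bound either to an in-process implementation or to a remote one over HTTP, making co-location a deployment decision rather than a code change.

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StockCheck {
    interface InventoryService {
        int unitsInStock(String sku);
    }

    // Intra-process: a direct method call, no marshalling or network hop.
    static class LocalInventoryService implements InventoryService {
        public int unitsInStock(String sku) { return 42; } // stand-in logic
    }

    // Remote: every call pays for serialisation plus a network round trip.
    // The endpoint URL is illustrative only.
    static class RemoteInventoryService implements InventoryService {
        private final HttpClient client = HttpClient.newHttpClient();
        private final URI base = URI.create("http://inventory.example.com/stock/");

        public int unitsInStock(String sku) {
            try {
                HttpRequest request = HttpRequest.newBuilder(base.resolve(sku)).GET().build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                return Integer.parseInt(response.body().trim());
            } catch (IOException | InterruptedException e) {
                throw new RuntimeException("inventory service unavailable", e);
            }
        }
    }

    public static void main(String[] args) {
        // Choose the binding at deployment/wiring time; callers only see the interface.
        InventoryService inventory = new LocalInventoryService();
        System.out.println(inventory.unitsInStock("ABC-123"));
    }
}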
Of course there are a number of other considerations when building applications from services, and again we've been addressing them over the years in SOA deployments or earlier with services-based applications. One of the most critical is service (or task) orchestration - rarely is an application constructed from just one service, and even if it were, that service may itself be using other services to perform its work. As such, applications are really the flow of control between different services, and rarely is this some static a priori determination; the failure of a service may cause the application logic to determine that the next action is to invoke an operation on some other (compensation?) service. But again, as I said earlier, this is something we really should be taking for granted as understood, at least at a fundamental level. Whether or not the microservices community likes standards such as BPMN or their bespoke implementations, the aims behind them remain and we would be ill advised to ignore them. At least if you're going to build a new wheel it's a good idea to understand why the current wheel isn't good enough!
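A minimal sketch of that orchestration-with-compensation idea, with the services and operations invented for illustration: the application drives the flow and, if a later step fails, compensates the earlier one rather than leaving the application inconsistent.

public class BookingOrchestration {
    interface FlightService { String reserve(String traveller); void cancel(String ref); }
    interface HotelService  { String reserve(String traveller); }

    private final FlightService flights;
    private final HotelService hotels;

    public BookingOrchestration(FlightService flights, HotelService hotels) {
        this.flights = flights;
        this.hotels = hotels;
    }

    public void bookTrip(String traveller) {
        String flightRef = flights.reserve(traveller);
        try {
            hotels.reserve(traveller);
        } catch (RuntimeException hotelFailure) {
            // The hotel step failed, so the orchestration compensates the
            // flight reservation instead of leaving a half-booked trip.
            flights.cancel(flightRef);
            throw hotelFailure;
        }
    }
}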
For a long time I wrote about and considered the necessity for SOA governance. In fact even before the term SOA was coined, governance has been a crucial component of any distributed system. Unfortunately it's also one of the most overlooked aspects. As we moved to a more service-oriented approach, runtime and design-time governance became more and more important. How do you know the service you're about to use actually offers the right business capabilities as well as the right non-functional capabilities, e.g., can respond within the desired time, is secure, offers transactions etc.? Of course there are a number of ways in which these questions can be answered, but essentially you need a contract between the client and the service. Part of the contract will inevitably have to include the service API, whether defined in WSDL, WADL, IDL or something else entirely. These days that part of (SOA) governance is now subsumed within the new term API Management. No less important, just a different categorisation. And microservices need exactly the same thing, because it really doesn't matter what size a service is - it'll have an API and hence need to be managed or governed.
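In code, the contract half of that often surfaces as an annotated service interface; the JAX-RS sketch below (paths and types invented for illustration) plays much the same role that WSDL or IDL played for earlier generations of services.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/accounts")
public interface AccountsApi {

    // The published operation, its location and its representation format are
    // all part of the contract that clients (and governance tooling) rely on.
    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    Account get(@PathParam("id") String id);

    // A deliberately small DTO standing in for the agreed message shape.
    class Account {
        public String id;
        public String owner;
        public long balanceInPence;
    }
}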
Despite what I've heard about microservices, I really do believe that the existing SOA community has a lot to offer their microservices cousins; we've got to build on the experiences of the past decades and deliver better solutions to our communities rather than think that starting from scratch is the right approach.
Saturday, February 07, 2015
Container-less development
In the Java world I've been hearing a lot lately about container-less development. (Note, I'm not talking about containers such as docker.) Whether it's to help build microservices, to reduce complexity for Java EE developers, or for some other reason, moving away from containers seems to be the theme of the day. One of the core aims behind the movement away from containers appears to be simplifying the lives of application developers, and that's definitely a good thing.
In general anything we can do to improve the development experience is always a good thing. However, I worry that the idea of moving away from containers is not necessarily going to make the lives of developers easier in the long term. Let's spend a moment to look at some of the things we've heard as complaints about container-driven development. I'll paraphrase, but ... "They make it too complex to do easy things." Or "Containers are just too bloated and get in the way of agile development." Or "The notion of containers is an anti-pattern from the 20th century." Or even "Testing code in containers is just too hard."
Now before we try to address these concerns, let's look at something I said to Markus in a recent interview. "Any container, whether it's something like docker, the JVM or even a Java EE application server, shouldn't really get in your way as a developer but should also offer you the capabilities you need for building and running your applications in a way that is easy to use, understand and manage. If you think about it, your laptop is a container. The operating system is a container. We take these for granted because over the years we've gotten really really good at building them to be unobtrusive. Yet if you look under the covers at your typical operating system, it's doing a lot of hard work for you and offering you capabilities such as process forking and scheduling, that you don't know you need but you do need them."
It's easy to make blanket statements like "containers are bad for agile development" or "containers are not fit for modern web apps", but the reality is somewhat different. Of course there may be specific examples of containers where these statements are correct, but let's try and remain objective here! As I mentioned to Markus, we're using containers in our daily development lives and are not even aware of them most of the time. A good container, whether an operating system or a Java EE application server, shouldn't get in your way but should be there when you need it. When you don't need it, it sits in the background consuming little memory and processor time, perhaps still ensuring that certain bad things don't happen to your application while it's running - things you didn't even consider initially. For example, how often do you consider that your operating system is providing red zone protection for your individual processes?
As I said, a good container shouldn't get in your way. However, that doesn't mean it isn't needed. Many applications start out a lot simpler than they end up. You may not consider security initially, for instance, but if your application/service is going to be used by more than you, and especially if it's going to be available globally, then it's something you're going to need eventually, and a good container should be able to either take care of that for you opaquely or offer a simple-to-use interface. In essence, a good (ideal?) container should be like your favourite operating system - doing things in the background that you need but don't really want to understand, and offering easy-to-use APIs for those services you do need.
For enterprise users (and I include Web developers in that category) those services would include security, data management (RDBMS, NoSQL), messaging (not everything will communicate using HTTP) and transactions (yes, some people may not like them but they're essential for many types of application to ensure consistency in both the local and distributed cases). I'm not going to suggest that there's any such thing as the ideal/perfect container in the Java world today. There are definitely some implementations that would make you want to seriously consider container-less solutions! However, there are several implementations that have made significant strides in improving the developer experience and pushing themselves into the background, becoming part of the substrate. And a number of developer tools have sprung up to help developers further, such as Forge and Arquillian.
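As a small sketch of what "easy to use APIs for those services you do need" can look like, assuming a Java EE container and with the bean and role names invented, transaction demarcation and the security check are declared rather than hand-coded:

import javax.annotation.security.RolesAllowed;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class PaymentsBean {

    // The container demarcates the transaction and enforces the role check;
    // the method body stays pure business logic.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    @RolesAllowed("payments-admin")
    public void transfer(String fromAccount, String toAccount, long amountInPence) {
        // debit(fromAccount, amountInPence);
        // credit(toAccount, amountInPence);
    }
}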
If you consider what led to the rise of containers, it wasn't because someone somewhere thought "Oh wouldn't it be good if I threw everything and the kitchen sink into a deployment environment". Believe it or not there was a time before containers. Back then we didn't have multi-threaded languages. Everything was interconnected individual services, communicating using bespoke protocols. Your application was probably one or more services and clients, again communicating to get things done. If all of these services ran on the same machine (entirely possible) then once again you could consider the operating system as your application deployment container.
These services were there for a reason though: applications needed them! The development of containers as we know them today was therefore a natural evolution, given improvements in language capabilities and hardware performance (reducing interprocess communication at the very least). Granted, we may not have focussed enough on making the development of applications with containers a seamless and natural thing. But that doesn't obviate the need.
Consider the container-less approach. For some applications this may be the right approach, just as we've never said that container-based development (or Java EE) was right for all applications. But as the complexity of the application or individual service grows and there's a need for more functionality (e.g., caching or security) then application developers shouldn't have to worry about which caching implementation is the best for their environment, or which version works well with the other functional components they're relying upon. Eventually container-less frameworks will start to address these concerns and add the "missing" features, whether as interconnected individual (micro?) services in their own address spaces or co-located with the application code/business logic but (hopefully) in an opaque manner that doesn't get in the way of the developer. Once we start down that road we're heading towards something that looks very similar to a container.
Rather than throw away the inherent benefits of containers, I think we should be working to make them even easier to use. Maybe this requires changes in standards, for those containers that are based upon them. Maybe it's giving feedback to the container developers on what's getting in the way. Maybe it's working with the container-less efforts to build next generation containers that fit into their development and deployment experience seamlessly. There are a number of ways this can go, but none of them are really container-less.