Saturday, February 28, 2015

I grew up in the '60s and '70s with Star Trek. In those days you didn't have hundreds of TV channels or many viewing options, and Star Trek stood out for good reasons. I continue to be a fan today, but the original series is my favourite. So it was with great sadness that I heard of the death of Leonard Nimoy. I watched him in Mission Impossible back then too, but it is as Spock that I'll remember him most. I thought about how I'd want to put my thoughts down, but realised that someone else said it a lot better than I could many years ago.
"We are assembled here today to pay final respects to our honored dead. And yet it should be noted that in the midst of our sorrow, this death takes place in the shadow of new life, the sunrise of a new world; a world that our beloved comrade gave his life to protect and nourish. He did not feel this sacrifice a vain or empty one, and we will not debate his profound wisdom at these proceedings. Of my friend, I can only say this: of all the souls I have encountered in my travels, his was the most... human."
Saturday, February 21, 2015
Microservices
I've been meaning to write up some thoughts I've had on microservices for quite a while but never really had the time (or made the time). However, when I was asked if I'd like to do an interview on the subject for InfoQ, I figured it would be a forcing function. I think the interview was great and hopefully it helped to make it clear where I stand on the topic: whatever it's called, whether it's services-based, SOA, or microservices, the core principles have remained the same throughout the last four decades. What worries me about microservices, though, is the headlong rush by some to adopt something without fully understanding those principles. Yes, there are some really interesting technologies around today that make it easier to develop good SOA implementations, such as Docker, Vert.x and Fabric8, but building and deploying individual services is the least of your worries when you embark on the service/SOA road.
Of course identifying the services you need is an important piece of any service-oriented architecture. How "big" they should be is implicitly part of that problem, though most successful implementations I've come across over the years rarely consider size in terms of lines of code (LOC) but rather business function(s). The emphasis on size in terms of LOC is one of the things I don't really like about microservices - unlike in the scientific world, micro here is not well defined, and one person's "micro" could easily be another's "macro".
This brings us to the concept of service composition. Not all applications will be built from atomic (indivisible) services. Some services may be composed of other services. Of course there are different ways of approaching that from an implementation perspective. For instance, a composite service could just be an intelligent endpoint (service) which acts as a coordinator for remote interactions on the other (constituent) services, such that a client never gets to interact with them directly. Another implementation could co-locate the constituent services in the same address space as the coordinator. Of course a good service-based implementation would not expose these implementation choices through the composite service's API, so that implementations can be changed without requiring changes to users.
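To make the coordinator style concrete, here's a minimal sketch in Java. All of the names (OrderService, PaymentService, ShippingService) are hypothetical; the point is only that the constituent services sit behind the composite's API:

interface PaymentService {
    void charge(String customerId, long amountInPence);
}

interface ShippingService {
    void dispatch(String customerId, String item);
}

// The composite service: clients call placeOrder() and never interact
// with the constituent services directly.
public class OrderService {
    private final PaymentService payments;   // may be a local object or a remote proxy
    private final ShippingService shipping;  // swapping one for the other needs no client change

    public OrderService(PaymentService payments, ShippingService shipping) {
        this.payments = payments;
        this.shipping = shipping;
    }

    public void placeOrder(String customerId, String item, long amountInPence) {
        payments.charge(customerId, amountInPence);
        shipping.dispatch(customerId, item);
    }
}

Whether the constituents are co-located in the coordinator's address space or invoked remotely then becomes purely a deployment decision.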
Some other factors that must be taken into account when considering the "size" of a service or how the constituent services relate to a composite service include:
(i) performance - remote invocations, whether using a binary protocol such as AMQP or, especially, a text-based protocol such as HTTP, are significantly slower than intra-process communication. Therefore, if performance is important and there will be many cross-service interactions, it may make sense either to merge services or to deploy them within the same address space so that intra-process communication can be used instead. A service does not have to reside in its own distinct operating system process (see the sketch after this list).
(ii) fault tolerance - despite the fact that you may be able to replicate a specific service to try to obtain high availability, there's always going to be a finite probability that your service will become unavailable - catastrophic failures do happen (entropy always increases!). And remember that it may not be possible, or at least easy, to replicate some services; e.g., active replication of a service requires the business logic to be deterministic, otherwise you need to use a passive replication protocol, which may adversely impact performance to the point of making replication unrealistic in the first place. Therefore, if the failure of a service causes other services to be unusable (no Plan B), it may make more sense to co-locate these services into a single unit of failure, such that if one is going to fail (crash) they all fail anyway.
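Returning to point (i), here's a minimal sketch of how the same service contract can be given either an in-process or a remote binding without changing its API. The QuoteService contract and the endpoint path are hypothetical:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

interface QuoteService {
    String quoteFor(String symbol) throws Exception;
}

// In-process binding: a plain method call, no marshalling or network hop.
class LocalQuoteService implements QuoteService {
    public String quoteFor(String symbol) {
        return symbol + ":42.0"; // stand-in business logic
    }
}

// Remote binding: the same contract invoked over HTTP, typically orders of
// magnitude slower per call than the local binding.
class HttpQuoteService implements QuoteService {
    private final String baseUrl;

    HttpQuoteService(String baseUrl) { this.baseUrl = baseUrl; }

    public String quoteFor(String symbol) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(baseUrl + "/quote/" + symbol).openConnection();
        try (BufferedReader in =
                new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            return in.readLine();
        } finally {
            conn.disconnect();
        }
    }
}

Clients code against QuoteService and never know which binding they've been given, so co-locating two chatty services becomes a configuration change rather than a rewrite.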
Of course there are a number of other considerations when building applications from services, and again we've been addressing them over the years in SOA deployments, or earlier with services-based applications. One of the most critical is service (or task) orchestration - rarely is an application constructed from just one service, and even if it were, that service may itself be using other services to perform its work. As such, applications are really the flow of control between different services, and rarely is this some static a priori determination; the failure of a service may cause the application logic to determine that the next action is to invoke an operation on some other (compensation?) service. But again, as I said earlier, this is something we really should be taking for granted as understood, at least at a fundamental level. Whether or not the microservices community likes standards such as BPMN, the aims behind them, and behind their bespoke implementations, remain, and we would be ill advised to ignore them. At the very least, if you're going to build a new wheel it's a good idea to understand why the current wheel isn't good enough!
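As an illustration of the compensation idea, here's a hand-rolled sketch of an orchestrator that undoes completed steps when a later one fails. A real BPMN engine or transaction manager adds persistence, retries and crash recovery, none of which appears here:

import java.util.ArrayDeque;
import java.util.Deque;

public class Orchestrator {
    interface Step {
        void invoke() throws Exception;   // forward action on some service
        void compensate();                // undo action if a later step fails
    }

    private final Deque<Step> completed = new ArrayDeque<>();

    public void run(Step... steps) throws Exception {
        for (Step step : steps) {
            try {
                step.invoke();
                completed.push(step);
            } catch (Exception failure) {
                // Plan B: compensate everything that already succeeded, newest first.
                while (!completed.isEmpty()) {
                    completed.pop().compensate();
                }
                throw failure;
            }
        }
    }
}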
For a long time I wrote about and considered the necessity of SOA governance. In fact, even before the term SOA was coined, governance was a crucial component of any distributed system. Unfortunately it's also one of the most overlooked aspects. As we moved to a more service-oriented approach, runtime and design-time governance became more and more important. How do you know the service you're about to use actually offers the right business capabilities as well as the right non-functional capabilities, e.g., can respond within the desired time, is secure, offers transactions, etc.? Of course there are a number of ways in which these questions can be answered, but essentially you need a contract between the client and the service. Part of the contract will inevitably have to include the service API, whether defined in WSDL, WADL, IDL or something else entirely. These days that part of (SOA) governance is now subsumed within the new term API Management. No less important, just a different categorisation. And microservices need exactly the same thing, because the size of a service really doesn't matter - it'll have an API and hence need to be managed or governed.
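To illustrate that a contract covers more than the API signature, here's a hypothetical sketch of non-functional requirements attached to a service interface, of the kind a registry or governance tool could check before binding a client. The annotation and its attributes are invented for illustration:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical contract metadata: the functional API plus the
// non-functional capabilities a client should be able to rely on.
@Retention(RetentionPolicy.RUNTIME)
@interface ServiceContract {
    long maxResponseMillis();   // desired response time
    boolean transactional();    // does the service offer transactions?
    boolean secured();          // must interactions be secured?
}

@ServiceContract(maxResponseMillis = 200, transactional = true, secured = true)
interface AccountService {
    void credit(String accountId, long amountInPence);
}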
Despite what I've heard about microservices, I really do believe that the existing SOA community has a lot to offer its microservices cousins; we've got to build on the experiences of the past decades and deliver better solutions to our communities, rather than think that starting from scratch is the right approach.
Saturday, February 07, 2015
Container-less development
In the Java world we've been hearing a lot lately about container-less development. (Note, I'm not talking about containers such as Docker.) Whether it's to help build microservices, to reduce complexity for Java EE developers, or for some other reason, moving away from containers seems to be the theme of the day. One of the core aims behind the movement away from containers appears to be simplifying the lives of application developers, and that's definitely a good thing.
In general, anything we can do to improve the development experience is a good thing. However, I worry that the idea of moving away from containers is not necessarily going to make the lives of developers easier in the long term. Let's spend a moment to look at some of the things we've heard as complaints about container-driven development. I'll paraphrase, but ... "They make it too complex to do easy things." Or "Containers are just too bloated and get in the way of agile development." Or "The notion of containers is an anti-pattern from the 20th century." Or even "Testing code in containers is just too hard."
Now before we try to address these concerns, let's look at something I said to Markus in a recent interview. "Any container, whether it's something like docker, the JVM or even a Java EE application server, shouldn't really get in your way as a developer but should also offer you the capabilities you need for building and running your applications in a way that is easy to use, understand and manage. If you think about it, your laptop is a container. The operating system is a container. We take these for granted because over the years we've gotten really really good at building them to be unobtrusive. Yet if you look under the covers at your typical operating system, it's doing a lot of hard work for you and offering you capabilities such as process forking and scheduling, that you don't know you need but you do need them."
It's easy to make blanket statements like "containers are bad for agile development" or "containers are not fit for modern web apps", but the reality is somewhat different. Of course there may be specific examples of containers where these statements are correct, but let's try and remain objective here! As I mentioned to Markus, we're using containers in our daily development lives and are not even aware of them most of the time. A good container, whether an operating system or a Java EE application server, shouldn't get in your way but should be there when you need it. When you don't need it, it sits in the background consuming limited memory and processor time, perhaps still ensuring that certain bad things you didn't even consider initially don't happen to your application while it's running. For example, how often do you consider that your operating system is providing red-zone protection for individual processes?
As I said, a good container shouldn't get in your way. However, that doesn't mean it isn't needed. Many applications start out a lot simpler than they end up. You may not consider security initially, for instance, but if your application/service is going to be used by more than just you, and especially if it's going to be available globally, then it's something you're going to need eventually, and a good container should be able to either take care of that for you opaquely or offer a simple-to-use interface. In essence, a good (ideal?) container should be like your favourite operating system - doing things in the background that you need but don't really want to understand, and offering easy-to-use APIs for those services you do need.
For enterprise users (and I include Web developers in that category) those services would include security, data management (RDBMS, NoSQL), messaging (not everything will communicate using HTTP) and transactions (yes, some people may not like them, but they're essential for many types of application to ensure consistency in both the local and the distributed case). I'm not going to suggest that there's any such thing as the ideal/perfect container in the Java world today. There are definitely some implementations that would make you seriously consider looking at container-less solutions! However, there are several implementations that have made significant strides in improving the developer experience and pushing themselves into the background, becoming part of the substrate. And a number of developer tools have sprung up to help developers further, such as Forge and Arquillian.
If you consider what led to the rise of containers, it wasn't because someone somewhere thought "Oh wouldn't it be good if I threw everything and the kitchen sink into a deployment environment". Believe it or not, there was a time before containers. Back then we didn't have multi-threaded languages. Everything was interconnected individual services, communicating using bespoke protocols. Your application was probably one or more services and clients, again communicating to get things done. If all of these services ran on the same machine (entirely possible) then once again you could consider the operating system as your application deployment container.
These services were there for a reason though: applications needed them! The development of containers as we know them today was therefore a natural evolution given improvements in language capabilities and hardware performance (reduce the interprocess communication at the very least). Granted we may not have focussed enough on making the development of applications with containers a seamless and natural thing. But that doesn't obviate the need.
Consider the container-less approach. For some applications this may be the right approach, just as we've never said that container-based development (or Java EE) was right for all applications. But as the complexity of the application or individual service grows and there's a need for more functionality (e.g., caching or security) then application developers shouldn't have to worry about which caching implementation is the best for their environment, or which version works well with the other functional components they're relying upon. Eventually container-less frameworks will start to address these concerns and add the "missing" features, whether as interconnected individual (micro?) services in their own address spaces or co-located with the application code/business logic but (hopefully) in an opaque manner that doesn't get in the way of the developer. Once we start down that road we're heading towards something that looks very similar to a container.
Rather than throw away the inherent benefits of containers, I think we should be working to make them even easier to use. Maybe this requires changes in standards, where those containers are based upon them. Maybe it's giving feedback to the container developers on what's getting in the way. Maybe it's working with the container-less efforts to build next generation containers that fit into their development and deployment experience seamlessly. There are a number of ways this can go, but none of them are really container-less.
Monday, October 27, 2014
Time to reflect
I don't use my blog as much as I used to, due to a lack of time and of things to say that I don't already say through other avenues. But something happened today that made me stop and think that perhaps I could use this blog for a more personal posting than usual.
I've always thought that life is precious and yet we often take it for granted, typically until the last moment. We make a big show of people being born because, whatever your faith or beliefs, seeing a new life born into the world is a wonderful thing! Death is often a more dour and more personal thing. Typically, unless someone we knew died, we only hear about the deaths of celebrities, many of whom probably had little or no impact on our own lives.
Death is a sad enough occasion at the best of times. Again, depending upon your faith or belief system, it is probably the last time that unique individual will set foot on this planet and mingle with the people here. Some aspects of what made them human, such as the raw materials, will eventually find their way back into the environment and, just as we're all made of "star stuff", back into other people in one way or another. But their uniqueness, their individuality, is gone forever - as best we can tell. That is sad. At its rawest, this is a loss of information that can never be retrieved. A loss of the memories, experiences etc. that helped to make the person who they were.
We often hear statements like "they're not dead as long as we remember them". Thinking about the sentiment behind these kinds of statements, it makes sense. And we probably all know someone who died, family or celebrity, whom we remember fondly. But what of those people who have no one? That's the biggest loss of all: there's no one to remember them, to remember what made them unique among the 7 billion people on the planet. Maybe they weren't celebrities. Maybe they weren't world leaders or people who went down in the history books. But they were people nonetheless, and to not be remembered is like them falling into a black hole, where no trace remains.
If you've gotten this far then you may be wondering why I'm writing this. I live with my family in an area of the country that means we have only 2 neighbouring houses. Both houses have people in them who have lived there for over 7 decades (we've been here for 14 years). Today one of those people, John Hudspith, died. He was 80 and a kind, quietly spoken gentleman; a man of his era. But he was alone. No family left alive. Few friends, other than ourselves and the other neighbours; none really close. Even then he was a private person. And it struck me that in his death he would be forgotten because he lacked celebrity status or family or history-worthiness. Well this is my small attempt to give him a little immortality, because if you've read here then you've paused for a moment to wonder about this John Hudspith, who he was and why I would want to remember him. Thank you.
Tuesday, October 14, 2014
Encrypting Data?
I read that the FBI doesn't want Google or Apple to encrypt data on phones by default. Their reasoning is that it makes it harder for them to track evil doers. I do understand their concerns but I don't believe in their solution: no encryption, or give them keys to decrypt. It's not that I distrust the police or security forces or believe criminals should be able to get away with their crimes, but if my data can be decrypted by one group then there's a good chance it can be decrypted by others (backdoors can and will be exploited). I don't encrypt my data to hide it from the law; I encrypt it to stop it getting into the hands of criminals and people who could use it against me or others! And if we're not allowed to encrypt phones then what's next? Laptops? Cloud?
Unencrypted data may make their job easier, but surely they do detective work too? Just imagine if the FBI's approach had been enabled decades or even centuries ago. Letters couldn't go into envelopes, or envelopes could be opened at any time (this probably happened/happens today anyway); it would be illegal to write in anything other than plain English or natural languages (no codes); presumably all of your data would have to be easily accessible (no bank vaults, or their codes would have to be available to the police without a warrant!). The latest Sherlock Holmes stories would be very mundane, as he'd just need to access the criminals' documents to discover their evil plans.
The reality is that encryption of data, the hiding of that data, has always happened. Whether it's the Sherlock Holmes story The Adventure of the Dancing Men, the Germans during WW2, or the Romans, there are countless examples of coded information being used for one reason or another. And good detective work, aided by people in the field, has always been at the heart of the solutions. I don't want criminals to have access to my data, and if that means the police need to do a bit more work then so be it.
Monday, May 26, 2014
Microservices and transactions - turning back the clock?!
A cross-post of an entry I wrote for the JBossTS/Narayana blog. Microservices and transactions, oh my!
Thursday, May 08, 2014
Microservices Architecture
I wrote a piece on InfoQ a while back on Microservices and SOA. While researching for the article and afterwards, I was struck by something else I wrote almost 8 years ago around SOA 2.0. I've got to say that I see a lot of similarities: people trying to come up with new terms for something that already exists and which really doesn't need to be redefined, just better understood! Please, no Microservices. Let's stick with SOA!
Sunday, April 27, 2014
The future adaptive middleware platform
Back in the 1990s, way before the success of Java, let alone the advent of Cloud, there was a lot of research in the area of configurable and adaptable distributed systems. There was even a conference, the IEEE Conference on Configurable Distributed Systems. Some of the earliest work on distributed agents, adaptable mobile systems and weak consistency happened there, and I'm glad to have been a part of that effort. However, two decades ago the kinds of environments that we envisioned were really just the stuff of blue-sky research. Times change, though, and I believe that a lot of what we did back then, and research that has happened in the intervening years, is now much more applicable and in fact necessary for the kind of middleware platform that we need today.
For a start, one of the aims of some of the research we did was environments that could self-manage, monitor themselves and adapt to change, whether due to failures in network topology and machines, or to changes in application (user) requirements. These were systems that we believed would need to be deployed for long periods of time with little or no human intervention. These are precisely the kinds of environments that we are considering today for Cloud and IoT: reducing the number of system administrators needed to manage your applications is a key aim for the Cloud, and imagine if you had to keep logging in to your smart sensors every time the wifi went down or a new device was added to the network.
There's a lot that can change within a distributed system and much of it inadvertent or unknowable a priori. This includes failures of the network (partitions, overloading) and the nodes (crash failures or overloading making the machine so slow that it appears to have crashed). Machines being added to the environment may mean that it's more efficient to migrate entire applications or components (services) to these new arrivals to maintain desired SLAs, such as performance or reliability. Likewise a machine may become so overloaded that it's simply impossible to maintain an SLA and so migration off it elsewhere may be necessary. Traditional monitoring and management approaches would work here but tend to be far too manual. This tends to mean that whilst problems (faults) can be tolerated, the negative impact on clients and applications, such as downtime, can be too much.
The middleware should be able to detect these kinds of problems or inabilities to meet SLAs, or even predict that these SLA conflicts are going to occur (Bayesian inference networks are good for this). It may seem like a relatively simple (or subtle) addition to monitoring and management (or governance), but it's crucial. Adding this capability doesn't just make the middleware infrastructure a little more useful and capable; it elevates it to an entirely different level. The infrastructure needs to have SLAs and QoS built in from the start, for components as well as higher-level services - JON-like in monitoring and managing its surroundings as well as itself.
Each component potentially needs to be driven through a smart proxy so that things can be switched dynamically between local and remote implementations. Maybe environment-specific component implementations are needed if existing ones cannot fit or be tuned to fit, e.g., a component written in C for embedded environments where the JVM cannot run due to space limitations. It also needs something like stub-scion pairs (Shapiro in the '80s, or Caughey with Shadows in the '90s) to allow for object migration with dependency tracking. Also add in the disconnected operation work from the '80s and '90s: yes, the network has improved a lot over the years, but disconnection is more likely to be noticed now because we are so used to being connected.
We need new frameworks and models for building applications, though current approaches should still work. Transparent adaptation is best, but opaque adaptation allows existing applications to be reused. Each component needs something that implements a reconfigure/adapt interface and listens on a bus for these events (see the sketch below). Components adapt based on available memory, processor changes such as speed or number, network characteristics, disconnection, load on the processor, dependencies on other components, etc. Include the dispatcher architecture to help adaptation at various levels throughout the invocation stack.
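Here's a minimal sketch of that reconfigure/adapt idea: components implement an adapt interface and listen on an event bus for changes in their environment. All of the names (Adaptable, EnvironmentBus, EnvironmentEvent, SmartProxy) are hypothetical:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

class EnvironmentEvent {
    final String kind;    // e.g. "memory", "network", "disconnection", "load"
    final double value;   // a measurement associated with the change

    EnvironmentEvent(String kind, double value) {
        this.kind = kind;
        this.value = value;
    }
}

interface Adaptable {
    void adapt(EnvironmentEvent event);
}

class EnvironmentBus {
    private final List<Adaptable> listeners = new CopyOnWriteArrayList<>();

    void register(Adaptable component) { listeners.add(component); }

    void publish(EnvironmentEvent event) {
        for (Adaptable listener : listeners) {
            listener.adapt(event);   // each component decides how to react
        }
    }
}

// Example component: a smart proxy that falls back from a remote to a
// co-located implementation when it sees the network disconnect.
class SmartProxy implements Adaptable {
    private volatile boolean useLocal = false;

    public void adapt(EnvironmentEvent event) {
        if ("disconnection".equals(event.kind)) {
            useLocal = true;
        }
    }
}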
OK so let's summarise the features/capabilities and add a few things that are implicit:
- Can adapt to changes in environment. Autonomous monitoring and management.
- All components can have contracts and SLAs.
- Event oriented backbone/backplane/bus.
- Asynchronous interactions. Synchronous on top if necessary.
- Flexible threading model.
- Core low overhead and footprint. Assumed to be everywhere and all applications or services plug into it. So much of these other capabilities would be dynamically added when/if needed, such that the core needs to know how to do very little initially.
- Repository of components with social-like tagging for contract selection.
- Native language components ranging from thin client through to entire stack.
- DNA/RNA-like configuration to allow for recovery in the event of catastrophic failure or to support complete or partial migration to new (blank/empty) hardware.
- Self healing.
- Tracking of dependencies between objects or services so that if the migration of an object/service is necessary (e.g., due to a node failure or being powered down), all related services/objects can be migrated as well to ensure continued operation.
So this is it in a (relative) nutshell. The links I've included throughout should help to give more details. If I get the time I may expand on some of these topics in future entries. Even better, maybe I'll also get a chance to present on one or more of these items at a conference or workshop. One of the problems I have with my job is that there's so much to do and so little time. Whilst all of this is wrapped up inside my head, the time to write it down doesn't come in a block large enough to do it justice in one sitting. Rather than spend months writing it up, I wanted to take a good open source approach and "release early, release often". Consider this a milestone or beta release.
Sunday, April 13, 2014
Rush and Senna
This is going to seem like a strange entry but I wanted to record it if only for myself. When I was growing up I was a serious Formula One fan and two events remain in my mind as defining moments of enjoying the sport. The first was James Hunt winning the F1 Championship (and Niki Lauda’s accident), and the second was the death of Ayrton Senna. I wasn’t watching the former live, but saw it on repeats (TV was much more limited in those days, so live feeds were few and far between). The latter I was watching with friends as it happened and can still remember where I was to this day - in the dorms at university, having just eaten lunch! There was excitement, risk and sadness in equal measure for each event and yet over the years they dwindled in my mind as other things, such as real life, took hold and dragged me through the present and into the future.
That was until I watched the movies Senna and Rush over the course of the last couple of years. Both of these movies are very different beasts, and yet both of them excel in bringing to the watcher that sense of what it was like being there at the time. As someone who was there at the time and someone who watches F1 today when I have time, they also illustrate what I think the sport misses these days: excitement and challenges (OK, the risk is a big element here). If you're at all interested in Formula One then check out either or both of these films!
Wednesday, April 09, 2014
Speaking at Dynamo14
I'm going to be speaking at the Dynamo14 conference in a few weeks' time. I'll be hot off a plane from Boston to be interviewed by the BBC's Rory Cellan-Jones! Looking forward to it, though I hope I'm not too jet-lagged!
Wednesday, April 02, 2014
Another "I told you so" moment
I mentioned earlier about how most cloud providers had agreed that there is such a thing as Private Cloud. It's nice to hear that it's a "tectonic shift" now ;-)
Saturday, January 25, 2014
Hybrid Cloud
I've read a few articles from analysts and media recently where they talk about 2014 as the year of Hybrid Cloud. I think this is good to hear, but I also wonder why it's taken so long for some in the industry to accept the inevitable. OK, that's more of a rhetorical question, since I do know the answer, and I'm sure many of you will too.
In reality I find it amusing, with more of an "I told you so" approach! Several people, as well as myself, have been talking about the need for private cloud. Whether it's issues with security, reliability or simply getting data into and from public clouds, public cloud has limitations and is definitely not right for everyone or every application/service. That's not to say that private clouds are problem free but it's not necessarily the same problems and neither should it be.
The combination of private and public clouds, coupled with cloud bursting when needed, is what we should all be aiming for, with pure private or pure public use simply a degenerate case. I find it interesting that Amazon now has a position on private and hybrid clouds, which is very different from what they were saying 4 years ago.
I also still believe, as I stated back then, that the whole definition of cloud needs to change. Whether it's ubiquitous computing or the Internet of Things, the explosion of devices around us and their capabilities (processor speed, memory, network access, etc.) has to become part of the cloud, and probably its biggest part. It makes no sense to ignore the powerful options that this gives us, let alone the fact we'll have no choice due to Shannon's Limit.
So although 2014 may be the year of hybrid cloud, it's the true cloud of devices that I'm still more excited about. Most of the options it creates still lie unexplored before us and I think that's where we should be adding a little more focus.
Tuesday, December 31, 2013
CapeDwarf, GAE and the Raspberry Pi
This Christmas I decided to take a look at turning my collection of Raspberry Pis into a private Cloud. I had two options: OpenShift Origin or CapeDwarf. I decided to go with CapeDwarf, since it runs on OpenShift as well.
I started with the latest v1 of CapeDwarf which requires JBossAS 7.2, with the intention to build everything from scratch. However, there were some issues due to the migration of AS7 to WildFly as well as with that version of CapeDwarf, so after talking with the team I decided to wait until they released the 2.0.0.Beta1 version just after Christmas.
I may still try to build everything from scratch at some point, but to save time I decided to go with the pre-built binary distribution and verify that it worked on the Pi. Before we start, make sure you have the right version of Maven installed (3 onwards):
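mvn -version

(On a Pi image that hasn't been updated, this may well report a pre-3 version.)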
This isn't right, so let's update Maven (sudo apt-get install maven git). And obviously ensure you've got a version of JDK 7 installed (this is the page I refer to when updating). Next, download the pre-built Beta 6. Try running that (don't forget to set JBOSS_HOME):
And eventually …
It's always nice to see WildFly start, unmodified, on a Raspberry Pi :-) You can test this WildFly deployment by going to the management console (port 9990). I installed the Lynx browser on my Pi so I could test locally (it's a headless instance). Lynx takes a bit of getting used to, but:
Validating your CapeDwarf installation isn't as simple as it could be at the moment, but I know the team are going to look into this, along with a worked example. So for now we'll follow the instructions on the GitHub repo and check out capedwarf-shared and capedwarf-blue. Build -shared first. Initially ...
mvn clean install -Dmaven.repo.local=~/mavenrepo
However, it turns out that using ~ causes problems - the shell doesn't expand a tilde embedded in the -D argument, so Maven treats it as a literal directory name - so let's redo it using an absolute path this time:
mvn clean install -Dmaven.repo.local=/home/pi/mavenrepo
Next, we build -blue using the same local maven repo:
mvn -U clean install -Dmaven.repo.local=/home/pi/mavenrepo
Now initially this kept failing after 30 minutes or so …
After some investigation it turns out this is due to name resolution within the Pi itself. Either ensure your Pi is registered in a DNS somewhere, or edit /etc/hosts to map the name of the Pi to 127.0.0.1. Then rebuild.
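For example, assuming the Pi still has the default hostname raspberrypi, the /etc/hosts entry would look something like:

127.0.0.1    localhost raspberrypi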
And after an hour or so (the Pi isn't fast!):
And that's about it for now. We've got GAE (via CapeDwarf) up and running on a Pi, as well as verifying the installation. Next steps would be to build and deploy some applications, but that'll have to wait for now.
Labels:
CapeDwarf,
GAE,
Google App Engine,
JBoss Application Server,
Raspberry Pi,
WildFly
Tuesday, October 01, 2013
HPTS 2013
Just back from a JCP-EC meeting, JavaOne and HPTS. Whilst I enjoyed them all, HPTS has to be my favourite. Unfortunately this year its schedule conflicted with JavaOne so I wasn't able to attend either event fully. But even just the 3 days that I was at HPTS were well worth the trip: it's a great workshop where you get the chance to meet people from all areas of our industry and talk without fear of confidentiality. "What's said at HPTS stays at HPTS".
I had the privilege of presenting again this year, on the topic of transactions, NoSQL and Big Data. I was also chairing a session immediately afterwards on a range of topics including hardware transactional memory. Overall the sessions are great, but it's the dinner and drink discussions that are the real value around the workshop. And it's a great chance to catch up with friends I tend to only see once every two years!
Labels:
big data,
compensation transactions,
hpts,
NoSQL,
transactions
Saturday, August 17, 2013
Your smartphone evolving?
I've been using a combination of smartphones over the past few years, from a range of vendors and a range of operating systems. Problems I've had with all of them, as well as the recent move by Ubuntu, have got me thinking: what do people want to do with their pads? Playing games is fine, but even then wouldn't it be better if you didn't have to code for each platform (iOS, Android, XBox, PS3 ...)? We've spent a lot of years working on productivity software, such as Word, Powerpoint, Eclipse etc., and today the equivalents for pads are woefully inadequate. Of course we are unlikely to have to wait as long for them to improve on pads as we did on PCs, but it's still a waste of time and energy! And running services off the device is not only a waste of compute power/bandwidth, it assumes the network is always present, which it often isn't.
I hate to admit this, but maybe Microsoft have it right in a way with the Surface running almost a stock version of Windows so that the same applications that run on the laptop/desktop can run on their pad, and vice versa. Now maybe Apple will eventually do the same thing with iOS, but Android doesn't provide a migration path from or to the desktop. In the end this may well be a significant limiting factor for Android and one which Google will find very hard to get around, without perhaps adopting standard Java.
Of course applications need to be aware of the environment on which they run so they can take advantage of the form factor, network connectivity etc. There may well be applications that simply do not, or should not, be expected to work on the complete range of deployment environments (phone, pad, laptop, desktop etc.) But are they the exception or the norm? I believe they are the exception: most of the applications that run on my laptop are ones I'd like to run on the pad; most of the things I do on my pad I'd like the option of doing on my laptop or phone, particularly now it has a 5" screen.
What does this mean for the "open source" pad and phone market? I believe that unless Android actually allows a wider variety of unmodified Linux-based applications to run on it, then it risks becoming marginalised. OK, this may be a strange thing to discuss when all we hear on an almost weekly basis is that Android's market share is growing, but look at Apple in the 1980s before Windows came along. In fact, Android's biggest threat could well come from the pure Linux pads/phones that we are beginning to see enter the market: these pads and phones can run stock Java, and if Android is a requirement then there's always virtualisation. I think the platform that has the best chance of winning (adoption/relevancy) is the one which most closely matches the OS that we use on our desktops.
Thursday, August 08, 2013
CloudCom 2013
I'm on the PC (program committee).
Call for Papers
The “Cloud” is a natural evolution of distributed computing and of the widespread adoption of virtualization and SOA. In Cloud Computing, IT-related capabilities and resources are provided as services, via the Internet and on demand, accessible without requiring detailed knowledge of the underlying technology. The IEEE International Conference and Workshops on Cloud Computing Technology and Science, steered by the Cloud Computing Association, aim to bring together researchers who work on cloud computing and related technologies.
Manuscripts need to be prepared according to IEEE CS format. For regular papers, the page limit will be 8 pages. Authors of accepted papers will be asked to present in a plenary session.
Manuscripts need to be prepared according to the IEEE CS format (Format Link)
For regular papers, the page limit will be 8 pages. (submission deadline: August 7)
(If the paper is accepted as a short paper, the page limit for final camera ready will be 6 pages.)
For workshops and the Ph.D. consortium, the page limit will be 6 pages. (submission deadline: August 7)
For poster and demo papers, the page limit will be 4 pages. (submission deadline: August 7)
The IEEE CloudCom 2013 submission site is: https://www.easychair.org/conferences/?conf=ieeecloudcom2013
All accepted papers will be published by IEEE CS Press (IEEE Xplore) and indexed by EI and ISSN.
IEEE Transactions on Cloud Computing (TCC: http://computer.org/TCC) is organising a Special Issue which encourages submission of revised and extended versions of the 2-3 best/top-rated papers in the area of Cloud Computing from our conference. The special issue also seeks direct submission of papers that present 'new' ideas for the first time in TCC. All papers will be peer-reviewed and selected competitively based on their originality and merit, as per the requirements of TCC. All queries on this special issue should be directed to its guest editors. Details on this special issue will be announced in a separate Call for Papers at:
http://www.computer.org/cms/Computer.org/transactions/cfps/cfp_tcc_ucc-st.pdf
Topics include but are not limited to:
Architecture:
Cloud Services models (IaaS, PaaS, SaaS)
Cloud services reference models and standardisation
Intercloud architecture models
Cloud federation and hybrid cloud infrastructure
Cloud services provisioning and management
Cloud services delivery models, campus integration and “last mile” issues
Networking technologies for data centers, intracloud and interclouds
Cloud powered services design
Programming models and systems/tools
Cloud system design with FPGA, GPU, APU
Monitoring, management and maintenance
Operational, economic and business models
Green data centers
Business processes, compliance and certification
Dynamic resource provisioning
Big Data:
Machine learning
Data mining
Approximate and scalable statistical methods
Graph algorithms
Querying and search
Data Lifecycle Management for Big Data (sources, cleansing, federation, preservation, privacy, etc.)
Frameworks, tools and their composition
Storage and analytic architectures
Performance and debugging
Hardware optimizations for Big Data (multi-core, GPU, networking, etc.)
Data Flow management and scheduling
Security and Privacy:
Accountability
Audit in clouds
Authentication and authorization
Cloud integrity and binding issues
Cryptography for/ in the cloud
Hypervisor security
Identity/ Security as a Service
Prevention of data loss or leakage
Secure, interoperable identity in the Cloud
Security and privacy in clouds
Trust and credential management
Trusted Computing in Cloud Computing
Usability and security
Services and Applications:
Security services on the Cloud
Data management applications and services on the Cloud
Scheduling and application workflows on the Cloud
Cloud application benchmarks
Cloud-based services and protocols
Cloud model and framework
Cloud-based storage and file systems
Cloud scalability and performance
Fault-tolerance of cloud services and applications
Application development and debugging tools
Business models and economics of Cloud services
Services for improving Cloud application availability
Use cases of Cloud applications
Virtualization:
Server, storage, network virtualization
Resource monitoring
Virtual desktop
Resilience, fault tolerance
Modeling and performance evaluation
Security aspects
Enabling disaster recovery, job migration
Energy efficient issues
HPC on Cloud:
Load balancing for HPC clouds
Middleware framework for HPC clouds
Scalable scheduling for HPC clouds
HPC as a Service
Performance Modeling and Management
Programming models for HPC clouds
HPC cloud applications; use cases and experiences with HPC clouds
Cloud deployment systems for HPC clouds
GPU on the Cloud
IoT and Mobile on Cloud:
IoT cloud architectures, models
Cloud-based dynamic composition of IoT applications and services
Cloud-based context-aware IoT applications and services
Mobile cloud architectures and models
Green mobile cloud computing
Resource management in mobile cloud environments
Cloud support for mobility-aware networking and protocols
Multimedia applications in mobile cloud environments
Security, privacy and trust in mobile IoT clouds
Cloud-based mobile networks and applications, e.g., cloud-based mobile social networks, cloud-based vehicle networks, and cloud-based ehealthcare networks
Call for Papers
The “Cloud” is a natural evolution of distributed computing and of the widespread adaption of virtualization and SOA. In Cloud Computing, IT-related capabilities and resources are provided as services, via the Internet and on-demand, accessible without requiring detailed knowledge of the underlying technology. The IEEE International Conference and Workshops on Cloud Computing Technology and Science, steered by the Cloud Computing Association, aim to bring together researchers who work on cloud computing and related technologies.
Manuscripts need to be prepared according to IEEE CS format. For regular papers, the page limit will be 8 pages. Authors of accepted papers will be asked to present in a plenary session.
Manuscripts need to be prepared according to the IEEE CS format (Format Link)
For regular papers, the page limit will be 8 pages. (submission deadline: August 7)
(If the paper is accepted as a short paper, the page limit for final camera ready will be 6 pages.)
For workshops and the Ph.D. consortium, the page limit will be 6 pages. (submission deadline: August 7)
For poster and demo papers, the page limit will be 4 pages. (submission deadline: August 7)
The IEEE CloudCom 2013 submission site is: https://www.easychair.org/conferences/?conf=ieeecloudcom2013
All accepted papers will be published by IEEE CS Press (IEEE Xplore) and indexed by EI and ISSN.
IEEE Transactions on Cloud Computing (TCC: http://computer.org/TCC) is organising a Special Issue which encourages submission of revised and extended versions of 2-3 best/top rated papers in the area of Cloud Computing from our conference. The special issue also seeks direct submission of papers that present 'new' ideas for the first time in TCC. All papers will be peer-reviewed and selected competitively based on their originality and merit as per requirement of TCC. All queries on this special issue should be directed to its guest editors. Details on this special issue will be informed about in a separate Call for Papers at:
http://www.computer.org/cms/Computer.org/transactions/cfps/cfp_tcc_ucc-st.pdf
Topics include but are not limited to:
Architecture:
Cloud Services models (IaaS, PaaS, SaaS)
Cloud services reference models and standardisation
Intercloud architecture models
Cloud federation and hybrid cloud infrastructure
Cloud services provisioning and management
Cloud services delivery models, campus integration and “last mile” issues
Networking technologies for data centers, intracloud and interclouds
Cloud powered services design
Programming models and systems/tools
Cloud system design with FPGA, GPU, APU
Monitoring, management and maintenance
Operational, economic and business models
Green data centers
Business processes, compliance and certification
Dynamic resource provisioning
Big Data:
Machine learning
Data mining
Approximate and scalable statistical methods
Graph algorithms
Querying and search
Data Lifecycle Management for Big Data (sources, cleansing, federation, preservation, privacy, etc.)
Frameworks, tools and their composition
Storage and analytic architectures
Performance and debugging
Hardware optimizations for Big Data (multi-core, GPU, networking, etc.)
Data Flow management and scheduling
Security and Privacy:
Accountability
Audit in clouds
Authentication and authorization
Cloud integrity and binding issues
Cryptography for/in the cloud
Hypervisor security
Identity/Security as a Service
Prevention of data loss or leakage
Secure, interoperable identity in the Cloud
Security and privacy in clouds
Trust and credential management
Trusted Computing in Cloud Computing
Usability and security
Services and Applications:
Security services on the Cloud
Data management applications and services on the Cloud
Scheduling and application workflows on the Cloud
Cloud application benchmarks
Cloud-based services and protocols
Cloud model and framework
Cloud-based storage and file systems
Cloud scalability and performance
Fault-tolerance of cloud services and applications
Application development and debugging tools
Business models and economics of Cloud services
Services for improving Cloud application availability
Use cases of Cloud applications
Virtualization:
Server, storage, network virtualization
Resource monitoring
Virtual desktop
Resilience, fault tolerance
Modeling and performance evaluation
Security aspects
Enabling disaster recovery, job migration
Energy efficiency issues
HPC on Cloud:
Load balancing for HPC clouds
Middleware framework for HPC clouds
Scalable scheduling for HPC clouds
HPC as a Service
Performance Modeling and Management
Programming models for HPC clouds
HPC cloud applications; use cases and experiences with HPC clouds
Cloud deployment systems for HPC clouds
GPU on the Cloud
IoT and Mobile on Cloud:
IoT cloud architectures, models
Cloud-based dynamic composition of IoT applications and services
Cloud-based context-aware IoT applications and services
Mobile cloud architectures and models
Green mobile cloud computing
Resource management in mobile cloud environments
Cloud support for mobility-aware networking and protocols
Multimedia applications in mobile cloud environments
Security, privacy and trust in mobile IoT clouds
Cloud-based mobile networks and applications, e.g., cloud-based mobile social networks, cloud-based vehicle networks, and cloud-based ehealthcare networks
Tuesday, August 06, 2013
WS-TX Technical Committee closes its doors ...
As Ian Robinson put it ...
"Per section 2.15 of the TC Process [1] "Closing a TC", the TC has decided by a full majority vote of the TC membership to close the WS-Tx Technical Committee. The TC successfully delivered 3 OASIS open standard specifications:
WS-Coordination
WS-AtomicTransaction
WS-BusinessActivity
We delivered 2 versions of the standard, the most recent being V1.2 in 2009.
There are a number of mature implementations of these specifications and no outstanding issues being discussed by the TC.
In the early days of the TC we had many excellent discussions to tie down and deliver a tight set of specifications; we had some great face to face meetings in the US and Europe and developed some great working relationships which I think have (mostly) survived the duration of the TC.
Our work is done. Victory is declared. Thanks to everyone for making it both successful and enjoyable.
[1] https://www.oasis-open.org/policies-guidelines/tc-process"
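For anyone who never used these specifications: WS-AtomicTransaction essentially extends the two-phase commit programming model familiar from JTA across web service boundaries, with WS-Coordination carrying the context between participants. Below is a minimal sketch of that model, assuming a JTA-capable container such as one running JBossTS; the account-transfer methods are hypothetical placeholders, and in a real WS-AT deployment the coordination context would flow in SOAP headers to the remote services.

import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class TransferExample {
    public static void main(String[] args) throws Exception {
        // Standard JNDI lookup for the container-managed transaction.
        UserTransaction tx = (UserTransaction)
                new InitialContext().lookup("java:comp/UserTransaction");

        tx.begin();
        try {
            // Hypothetical transactional work; with WS-AT the same
            // context would be propagated to remote service participants.
            debit("alice", 100);
            credit("bob", 100);
            tx.commit();   // prepare phase, then commit phase, across all participants
        } catch (Exception e) {
            tx.rollback(); // any failure rolls every participant back
            throw e;
        }
    }

    // Placeholders standing in for calls on transactional resources.
    private static void debit(String account, int amount) { /* ... */ }
    private static void credit(String account, int amount) { /* ... */ }
}

The point of the WS-Tx work was that the begin/commit/rollback boundary above could span multiple services from different vendors, which is why interoperable specifications mattered.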
"Per section 2.15 of the TC Process [1] "Closing a TC", the TC has decided by a full majority vote of the TC membership to close the WS-Tx Technical Committee. The TC successfully delivered 3 OASIS open standard specifications:
WS-Coordination
WS-AtomicTransaction
WS-BusinessActivity
We delivered 2 versions of the standard, the most recent being V1.2 in 2009.
There are a number of mature implementations of these specifications and no outstanding issues being discussed by the TC.
In the early days of the TC we had many excellent discussions to tie down and deliver a tight set of specifications; we had some great face to face meetings in the US and Europe and developed some great working relationships which I think have (mostly) survived the duration of the TC.
Our work is done. Victory is declared. Thanks to everyone for making it both successful and enjoyable.
[1] https://www.oasis-open.org/policies-guidelines/tc-process"
Labels: compensation transactions, JBossTS, oasis, ws-tx
Friday, July 19, 2013
Disappointed
Went to see the World War Z movie yesterday. I'm a big fan of the book and was looking forward to the movie, even though I knew it was only loosely based on the book. I'm glad I read the book first, though, because if I had seen the movie first I probably wouldn't have bothered reading it! What a disappointment: I understand that it may be difficult to translate the book one-to-one into a movie, given how it's written (I won't give away any spoilers here), but there were so many missed opportunities for the makers of the movie to really create something fantastic. Whether or not you like the film, if you haven't read the book then I thoroughly recommend it!
Saturday, May 25, 2013
Gaming moving to the next phase
I've been talking about the impact of ubiquitous computing for a while now, and specifically what this could mean for middleware. During at least one of the presentations I've given on the subject I've said that, given the way hardware and software are evolving, the next generation of dedicated gaming consoles is likely to be the last: the hardware in phones and tablets today is at least as powerful as the PS3 and Xbox 360, and many of the games you can run on your phone illustrate this quite nicely.
So I was interested, when I was at Google I/O last week, to hear the announcement about Google Games. It was disappointing that the keynote demo didn't work out for them at the time, but that's always a possibility with live demonstrations. However, the concept is sound and I expect this to grow to a point where any Android device can participate in cooperative or competitive near real-time multi-player games. Now if only we can persuade Google that they don't need to reinvent the middleware components necessary to make this a reality.