Given my background in fault tolerant distributed systems, performance metrics are something that come up time and time again. I've spent many a happy hour, day, week, month poring over timings, stack dumps etc. to figure out how to make RPCs go faster, replication scale and transaction logs perform. (Funnily enough I was doing the latter again over Christmas!) But this is something we've all had to do when working on Arjuna.
If you've ever had to do performance tuning and testing then you'll know that it can be fun, but often a never-ending task. Then there are the comparisons between different implementations. This isn't just a vendor-specific area: even in academia there's always a bit of a "mine is faster than yours" mentality. I suppose it's human nature and in some ways it's good and pushes you to improve. However, performance measurements need to be taken in a scientific manner, otherwise they are useless. It's no good simply stating that you get "X transactions a second" if you don't specify the conditions under which that figure was achieved. This is true for a number of performance metrics, not just in the computing industry: there's a reason world records in athletics have defined conditions, for instance.
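To make the point concrete, here's a minimal sketch of what "stating the conditions" might look like in practice: a toy benchmark harness that refuses to report a throughput number without also capturing the environment it ran in. The workload, field names and iteration count are all illustrative, not any particular product's benchmark.

```python
import platform
import time

def run_benchmark(workload, iterations=10_000):
    """Time a workload and report throughput *together with* the conditions.

    A bare ops/sec figure is meaningless on its own; recording the machine,
    OS and run parameters alongside it is the minimum needed for anyone
    else to interpret (or reproduce) the result.
    """
    start = time.perf_counter()
    for _ in range(iterations):
        workload()
    elapsed = time.perf_counter() - start
    return {
        "ops_per_second": iterations / elapsed,
        "conditions": {
            "machine": platform.machine(),
            "system": f"{platform.system()} {platform.release()}",
            "python": platform.python_version(),
            "iterations": iterations,
        },
    }

# Illustrative use: a trivial CPU-bound workload.
result = run_benchmark(lambda: sum(range(100)))
print(f"{result['ops_per_second']:.0f} ops/sec under {result['conditions']}")
```

A real harness would record far more (CPU count, memory, tuning parameters, warm-up policy), but even this much turns a bare number into something a reader can assess.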
Now of course you may not have the same hardware to test on, and the tests themselves can be notoriously hard to obtain from one vendor/competitor to another. What tuning parameters they use, as well as the general configuration, can also be difficult to discover. This is why Jim Gray and others wrote A Measure of Transaction Processing Power (though the principles in there can be applied much more widely). This work eventually led to the TPC (Transaction Processing Performance Council). But the general principle here is that a scientific approach to performance metrics is the right way (though it's still possible to skew them in your favour).
Whenever I see or hear of performance figures I always want to know the conditions (hardware, configuration, wind conditions etc.). It's not possible for me to take them seriously unless I have this additional data. Making the tests themselves available is also a good thing. However, performance is not something you can support in the way you do, say, transactions, where you either are transactional or you're not. With performance it can be a never-ending struggle to keep improving, especially as hardware and software improve and your competitors get better too; you've always got performance, even if it's bad! Performance is something that for some users can be as critical as new capabilities, or even more so. But it can't be performance at the cost of reliability or robustness: something that screams along faster than a speeding bullet yet crashes after 24 hours is unlikely to be as useful as something that is half as fast yet able to stay available indefinitely.
I work for Red Hat, where I lead JBoss technical direction and research/development. Prior to this I was SOA Technical Development Manager and Director of Standards. I was Chief Architect and co-founder at Arjuna Technologies, an HP spin-off (where I was a Distinguished Engineer). I've been working in the area of reliable distributed systems since the mid-80's. My PhD was on fault-tolerant distributed systems, replication and transactions. I'm also a Professor at Newcastle University and Lyon.
Wednesday, June 24, 2009
Tuesday, June 23, 2009
Internet dial tone?
The other day I had problems with my ISP that meant my connection to the rest of the world was up and down like a yo-yo. I self-diagnosed the problem (the ISP help desk was pretty useless) and in the course of doing so I realized something: where's the equivalent of the telephone dial tone for the internet? Well, if you're like me it's either Google or BBC News. Over the years, whenever I need to determine if there's a problem with my connection or some site I'm trying to use, I'll use one of these sites to double-check my own connectivity. OK, it's not exactly a science, but I assume that if I can talk to one or other of them then the fault lies elsewhere.
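The "dial tone" check amounts to something like the following sketch: try to open a connection to a couple of sites that are almost never down, and if every one of them fails, assume the fault is on my side rather than with the site I was originally trying to reach. The host list, port and timeout here are just the sort of values I'd pick, nothing canonical.

```python
import socket

def internet_dial_tone(hosts=("www.google.com", "www.bbc.co.uk"),
                       port=80, timeout=3.0):
    """Return True if at least one 'always up' host is reachable.

    If none of them answers, the problem is probably our own
    connection rather than the site we were trying to use.
    """
    for host in hosts:
        try:
            # create_connection handles DNS resolution and the TCP connect.
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            # Covers DNS failure, refused connections and timeouts.
            continue
    return False
```

Calling `internet_dial_tone()` and getting `False` back is my cue to blame the ISP rather than the website. As the post says, it's not exactly a science, but it's a useful first discriminator.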
Wednesday, June 17, 2009
CFP for MW4SOC 2009
4th Middleware for Service-Oriented Computing (MW4SOC)
Workshop at the ACM/IFIP/USENIX Middleware Conference
Nov 30 – Dec 4, 2009
Urbana Champaign, Illinois, USA
http://www.dedisys.org/mw4soc09/
This workshop has its own ISBN and will be published as part of the ACM International Conference Proceedings Series and will be included in the ACM digital library.
Important Dates
===============
Paper submission: August 1, 2009
Author notification: September 15, 2009
Camera-ready copies: October 1, 2009
Workshop date: November 30, 2009
Call details
============
Service Oriented Computing (SOC) is a computing paradigm broadly pushed by vendors, utilizing and providing services to support the rapid and scalable development of distributed applications in heterogeneous environments. However, the influence of SOC today goes far beyond the concepts of the original disciplines that spawned it. Many would argue that areas like business process modelling and management, Web2.0-style applications, data as a service, and even cloud computing emerge mainly due to the shift in paradigm towards SOC. Nevertheless, there is still a strong need to merge technology with an understanding of business processes and organizational structures, a combination of recognizing an enterprise's pain points and the potential solutions that can be applied to correct them.
While the immediate need of middleware support for SOC is evident, current approaches and solutions still fall short by primarily providing support for only the EAI aspect of SOC and do not sufficiently address issues such as service discovery, re-use, re-purpose, composition and aggregation support, service management, monitoring, and deployment and maintenance of large-scale heterogeneous infrastructures and applications. Moreover, quality properties (in particular dependability and security) need to be addressed not only by interfacing and communication standards, but also in terms of integrated middleware support. Recently, massive-scale and mobility were added to the challenges for Middleware for SOC.
The workshop consequently welcomes contributions on how specifically service oriented middleware can address the above challenges, to what extent it has to be service oriented by itself, and in particular how quality properties are supported.
Topics of interest
==================
* Architectures and platforms for Middleware for SOC.
* Core Middleware support for deployment, composition, and interaction.
* Integration of SLA (service level agreement) and/or technical policy support through middleware.
* Middleware support for service management, maintenance, monitoring, and control.
* Middleware support for integration of business functions and organizational structures into Service oriented Systems (SOS).
* Evaluation and experience reports of middleware for SOC and service oriented middleware.
Workshop co-chairs
==================
Karl M. Göschka (chair)
Schahram Dustdar
Frank Leymann
Helen Paik
Organizational chair
====================
Lorenz Froihofer, mw4soc@dedisys.org
Program committee
=================
Sami Bhiri, DERI (Ireland)
Paul Brebner, NICTA (Australia)
Gianpaolo Cugola, Politecnico di Milano (Italy)
Francisco Curbera, IBM (USA)
Frank Eliassen, University of Oslo (Norway)
Walid Gaaloul, Institut Telecom (France)
Harald C. Gall, Universität Zürich (Switzerland)
Nikolaos Georgantas, INRIA (France)
Chirine Ghedira, Univ. of Lyon I (France)
Svein Hallsteinsen, SINTEF (Norway)
Yanbo Han, ICT Chinese Academy of Sciences (China)
Valérie Issarny, INRIA (France)
Arno Jacobsen, Univ. Toronto (Canada)
Mehdi Jazayeri, Università della Svizzera Italiana (Switzerland)
Bernd Krämer, University of Hagen (Germany)
Mark Little, JBoss (USA)
Heiko Ludwig, IBM Research (USA)
Hamid Reza Motahari Nezhad, HP Labs (USA)
Nanjangud C. Narendra, IBM Research (India)
Rui Oliveira, Universidade do Minho (Portugal)
Cesare Pautasso, Università della Svizzera Italiana (Switzerland)
Fernando Pedone, Università della Svizzera Italiana (Switzerland)
Jose Pereira, Universidade do Minho (Portugal)
Florian Rosenberg, Vienna University of Technology (Austria)
Regis Saint-Paul, CREATE-NET (Italy)
Dietmar Schreiner, Vienna University of Technology (Austria)
Bruno Schulze, National Lab for Scientific Computing (Brazil)
Stefan Tai, Institut für Angewandte Informatik und Formale Beschreibungsverfahren - AIFB, Karlsruhe (Germany)
Aad van Moorsel, University of Newcastle (UK)
Eric Wohlstadter, University of British Columbia (Canada)
Raymond Wong, UNSW (Australia)
Roman Vitenberg, University of Oslo (Norway)
Liming Zhu, NICTA (Australia)
Sunday, June 14, 2009
The next wave?
I've been asked several times over the past few months what I think will be the next technology wave. If I knew that I'd be writing this entry from my own personal island in the sun! I've been involved in a few of these technology waves over the years, both as a user and a definer, but predicting them is another thing. For instance, who would have guessed at the turn of last century that SOAP would have taken on the major role it has today? Or that open source would have become such a defining wave?
I have my own theory for how technology waves begin, and it wasn't until I watched a program from the BBC on rogue waves for the second time that I found a decent analogy. The relatively new theory (based on Schrödinger's equation) goes that these super waves, which are big enough to sink ships, are formed when energy is "stolen" from one wave to feed another. This builds and builds to create these towering monsters. Well, I think technology waves are very similar: something relatively innocuous, such as SOAP, pulls in energy from other fields, such as EAI and the Web, to grow to the scale of a disruptive influence when it reaches a tipping point. This also makes it difficult to predict a priori whether or not something will be a wave: many different things contribute.
With that in mind, what do I think will influence the next technology waves? Here are a few ideas, though by no means all. I should add a disclaimer that these are my own personal opinions and not necessarily those of my employer:
- A new programming language? Over the past 40 years or more we've seen languages come and go. The only constant is binary! Periodically we go back to high-level versus low-level language debates (e.g., can compilers really optimize as well as hand-written machine code?) Java has been influential for a long time, but if history has taught us anything it's that everything has a season, so it's only natural that at some point Java's popularity will wane. But I'm not sure that a single language will replace it. Java didn't replace C, C++, COBOL or Lisp, for example. With the increasing popularity of languages such as Erlang and Scala, and even C++ making a comeback, variety is the right approach. Yet again a case of one size doesn't fit all. When I was at University we learnt a lot of different languages, such as Pascal, Occam, Concurrent Euclid, Ada, C, C++, Lisp, Prolog, Forth, Fortran, 6502 (still one of my favourites), 68K, etc. When Java came along that seemed to change, with the focus more on the single language. Hopefully we'll go full circle, because as an industry we can't afford to keep reinventing software every 10 years to suit a new language.
- What about Cloud/Virtualization? Yes, there's a lot of hype in this area and I do think it offers some very interesting possibilities. But I'm not sure it's a wave in its own right. I suspect that we're still missing something to turn this into a Super Technology Wave. That could be SOA, fault tolerance, the economy, or Skynet.
- It will finally dawn on the masses that security (including identity, trust etc.) is something we take for granted and yet is not available (at all or sufficiently) in many of the things we need. (See Cloud, for example.) Security as an afterthought should be replaced with security as an integral part of whatever we do. Yet again not a wave in and of itself, but something that will be pulled into one, I hope.
- Of course REST has been around for a long time and given the many discussions and debates that have been raging for the past few years we're definitely seeing more and more take-up. I'm not going to debate the pros and cons here (have done that before), but I am sure this will become a wave, if it's not already.
- A unified modeling approach to building distributed systems, pulling together events, messages, etc. JJ has been talking about this for a while and it would be nice to see.
So there are a few thoughts. There are more, such as around designing for testability and HCI, but those can wait for another time. Maybe some or all of the above will be sucked into the new technology wave. Maybe none of them will. That's why I'm not sat writing this on some sun-soaked beach on my own private island!
Friday, June 12, 2009
Utter disbelief
I've been involved with standards since the early 1990s, working within the OMG, OASIS, GGF, W3C and others. They all have their rules and regulations, and most of the time the technical committees are populated by the vendors. That's not because these organizations are closed to end-users, but I think many of those end-users believe it's a lot of time and effort to participate. They'd be right.
Over those years I've been involved in a lot of political wrangling. You have to know when to push and when to give in. Priorities are important and today's foe could be tomorrow's ally. Sometimes people can get very emotional about this proposal or that, trying hard to justify why it should be voted for or against. That's good, because passion is important in what we do. But sometimes the arguments can be very fatuous. However, it wasn't until the other day that I realized I hadn't heard all of them!
To protect individuals I won't say what meeting it was, but suffice it to say that there was a face-to-face meeting recently that I attended by phone. Because of the time differences it meant I was doing my normal work from early in the morning my time and then in the evening jumping on a call for another 6 hours. For 3 days running! Anyway, during the 2nd day one of the vendors was trying to get their point across to everyone else and failing: pretty much everyone had decided to vote against, but this vendor kept trying hard. I had assumed that because this vendor (name withheld!) has been involved with standards for a while they knew the rules of the game. But apparently not, or at least the people representing them didn't.
So when it became pretty obvious that they were going to lose (this standards group uses the rule that one vendor has one vote, no matter how many employees it has on the working group) the cry went up "But what about the thousands of customers we have who want things our way?" Huh?! WTF?! First of all that's a very subjective statement. How do we know those customers exist and what they really want? If we were to have to take them all into account then we'd have to solicit their responses on all votes. Secondly if we go that route it'll become a "customer war", with vendor A saying "I've got 1000 customers" and vendor B saying "well I've got 1001!" This kind of weighted voting doesn't work! If customers really want a say then they can sign up to any of the standards groups. In fact I'd really encourage them to do that, because it makes a lot of difference: the best standards all have customer input!
After I heard this and stopped choking, I realized that it was really a case of desperation on the part of this vendor. Hopefully they'll read the rules before they turn up next time.
Tuesday, June 09, 2009
Web 2.0 architectures
One of the high points of any JavaOne that I go to is getting to meet up with friends such as Duane and James. As always we have fun catching up over a few drinks and good food (hopefully the pictures of this year's "event" won't turn up!) But this year was more special than usual because their book is out! It's been a long time in the works but it is well worth the wait. If you're at all interested in Web 2.0 then take a look at this book!
Friday, June 05, 2009
Sun-set
I've no idea if this is the last JavaOne as some rumours suggest (others that it may simply become an add-on to Oracle World), but it is with some sadness that I think about that and the acquisition of Sun by Oracle. Every time I think of this I keep hearing the words of Don't Let The Sun Go Down On Me (used to great effect in The Lost Boys.)
Putting aside my role within Red Hat, I've always had a soft-spot for Sun going back to the start of my career. In one way or another I've worked with Sun hardware and software for over 20 years. Back then, in the mid-1980's, the workstation was really starting to come into its own as the old batch-driven mainframe systems tailed off. My first work computer was a Whitechapel, the UK attempt to enter that market. A great machine, particularly if you were new to administering a multi-user Unix machine.
But in the Arjuna project the real king of machines was the Sun 3/60 with SunOS. For years we all strove to own one, either as a hand-me-down or when we got new funding. We went through most of the Sun-4 series and into the Solaris years, with each year bringing a new top-of-the-line machine for one of us to manage (these were multi-user machines of course). Our project was so prominent within the University that others followed our lead where they could, and where they couldn't they looked on slightly enviously.
During that time we often spoke with various Sun engineers in the hardware and software groups, e.g., Jim Waldo. This was on things as varied as transactions (not surprising really) through to operating systems (we got one of the first drops of the Spring operating system) and distributed computing. Then when Sun released Oak and eventually Java it was a natural extension for us to get involved with that heavily.
It was in the early 1990's that things started to change though. We were using Linux from the first public release Linus made (I remember Stuart and I taking it home one evening to replace the Minix we'd been running on the Ataris). Then when the project bought us Pentium machines, Stuart made the switch from a Sparc 2 running Solaris. He and I ran various benchmarks against my Sparc and for most of the things we needed at the time (compiling, execution etc.) the Pentium/Linux combination was faster as well as being a lot cheaper! The rest, as they say, is history. Year on year the PC and Linux combo got faster and faster, laptops came into this pretty early too, and the project steadily moved away from everything Sun. By the end of the 1990's the last few Sun workstations still in use were several years old and in the minority.
From conversations with others I've had over the years I think this is a very similar pattern to elsewhere. Was there anything Sun could do about this? Maybe, and I'm sure that question will be debated for years to come. But for now it is with sadness that I contemplate a world without Sun. They helped shape what I am today from hardware, operating system, programming language(s) and so much more. In that regard I will always be in their debt.