I tried What Video Game Character Are You? and here's the result:
Not sure I'd agree with all of that - I can't remember the last time I got lost inside a building ;-)
Wednesday, June 29, 2005
Tuesday, June 28, 2005
Grid BOF at JavaOne
I'm here at JavaOne, where I sat on a panel for a BOF on the Grid: Building Tomorrow's Grid. Well, that was last night (9:30pm, yet it was surprisingly well attended, with standing room only) and I think it went very well. There was a lot of audience participation and it definitely seemed like there was a lot of interest in grid (small 'g') computing. Overall the panel pretty much agreed with one another; the main exception was when it came to the subject of Java's role in the grid (small 'g' again): the panel split down the middle, with Richard Nicholson and Dan Hushon arguing that Jini is the way forward, whilst Greg and I disagreed. As Greg pointed out, it's unrealistic to assume that the world will be purely Java, and the bridging/wrapping approach to embedding non-Java services into Jini seems like a hack if the real answer is simply to use the right tool (aka language) for the right job.
During our individual presentations at the start of the session, we were asked to answer the following questions and I think my answers were similar to those of the rest of the panel:
(i) What is grid computing? I made the distinction between grid (small 'g') and Grid/GRID computing. I think grid has been around for many years and people simply haven't collected all of these massively parallel and distributed applications under a single categorisation. Take a look at SETI@home, for example. I remember installing it when it first came out back in the late 1990s. The statistics for it today are astonishing: 3 million computers, 14 teraflops on average, 500,000 years of processing power in 18 months - the equivalent of several supercomputers. It's got to be one of the most successful grids around. GRID, on the other hand, is IMO an effort to try to standardise on practices, patterns and infrastructure for building grids: a great idea when you consider the number of grid toolkits that are around.
(ii) What problem does it solve? Pretty simple - not many organisations can afford supercomputers, but there are a lot of massively parallel applications out there and many computers that simply aren't used most of the time. (One member of the audience came up to me afterwards and said that his company is thinking about building a grid to use the power of the 14,000 machines they've got, which are idle 40% of the time.) Using someone else's resources to do your work seems like a good idea - it's cost effective for a start! (There's a quick back-of-the-envelope sketch after these questions that puts some numbers on this.)
(iii) Where is it on the hype scale? grid (small 'g') has been around for many years and most certainly isn't hype. Compared to that, GRID is more hype than reality but that's just a timing thing.
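For what it's worth, here's a back-of-the-envelope sketch of my own (nothing that was presented at the BOF - the class name is made up and the outputs are only as good as the figures quoted above) that just re-runs the arithmetic behind those numbers:

```java
public class GridBackOfEnvelope {
    public static void main(String[] args) {
        // SETI@home figures as quoted above (the post's numbers, not independently checked).
        double machines = 3000000;          // participating computers
        double cpuYearsDelivered = 500000;  // years of processing delivered...
        double elapsedYears = 1.5;          // ...in roughly 18 months

        // If every machine crunched flat out, the pool could in theory have delivered this much.
        double cpuYearsAvailable = machines * elapsedYears;
        long utilisationPercent = Math.round(100 * cpuYearsDelivered / cpuYearsAvailable);
        System.out.println("Implied average utilisation per machine: "
                + utilisationPercent + "%"); // works out at roughly 11%

        // The audience member's figures: 14,000 machines idle 40% of the time
        // is the equivalent of this many dedicated machines going spare.
        long spareMachines = Math.round(14000 * 0.40);
        System.out.println("Spare machine-equivalents: " + spareMachines); // 5,600
    }
}
```

Even at roughly 11% average utilisation per machine, the aggregate dwarfs what most organisations could buy outright - which is exactly the point.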
Overall I think it was a great BOF and I was pleasantly surprised to see how many people turned out. Now it's off for a book signing and definitely some session tracks.
Friday, June 24, 2005
SOAP: slow or fast?
There's an interesting discussion going on here between Michi and Jim about the performance of SOAP. I wasn't going to get involved in what could easily become a "PC versus Mac, Unix versus Windows" debate, but I'll add my 2 cents' worth.
I agree with them both, to a point.
I'm sure ICE is fast (I haven't used it either). I do know that CORBA implementations these days are very fast too, and message service implementations like AMS (for example) are built for speed. When I started out doing my PhD back in the mid-1980s, my first task was to help write and improve Rajdoot, one of the first RPC mechanisms around. Even then, when a fast network was 10Mbps (if you were lucky) and we used Whitechapels or Sun 3/60s, we could regularly get round trips of 5ms for messages up to 1K in size (packet fragmentation/re-assembly happens above this, so this was the maximum critical packet size). Not fast by today's standards, but fast back then. Having been working with SOAP and Web Services for 5 years now, I know it's slow even compared to what we had in 1986, so it simply doesn't compare to what's possible these days. So I agree with Michi on that point (and yes, we have tried compression over the years too, and got the same results as Michi - it works, but you've got to use it carefully).
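For anyone who hasn't measured this kind of thing, here's a minimal sketch of what a round-trip figure like that means (my own illustration - nothing to do with Rajdoot or any real RPC stack; the class name is made up and the 1K message size is just chosen to match the figure above). It bounces a 1K message off a trivial echo server over loopback and averages the time:

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class RoundTripSketch {
    public static void main(String[] args) throws Exception {
        final ServerSocket server = new ServerSocket(0); // trivial echo server on a free port
        Thread echo = new Thread(new Runnable() {
            public void run() {
                try {
                    Socket s = server.accept();
                    s.setTcpNoDelay(true);
                    InputStream in = s.getInputStream();
                    OutputStream out = s.getOutputStream();
                    byte[] buf = new byte[1024];
                    int n;
                    while ((n = in.read(buf)) >= 0) {
                        out.write(buf, 0, n); // echo the request straight back
                    }
                } catch (IOException e) { /* client closed the connection */ }
            }
        });
        echo.start();

        Socket client = new Socket("localhost", server.getLocalPort());
        client.setTcpNoDelay(true);
        OutputStream out = client.getOutputStream();
        InputStream in = client.getInputStream();
        byte[] request = new byte[1024]; // a 1K message, as in the figure above
        byte[] reply = new byte[1024];

        int rounds = 1000;
        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            out.write(request);
            int read = 0;
            while (read < reply.length) { // wait for the whole echo before the next round
                int n = in.read(reply, read, reply.length - read);
                if (n < 0) throw new EOFException("echo server went away");
                read += n;
            }
        }
        long avgMicroseconds = (System.nanoTime() - start) / rounds / 1000;
        System.out.println("average round trip for a 1K message: " + avgMicroseconds + "us");
        client.close();
        server.close();
    }
}
```

Of course this is just raw TCP on one machine; a real RPC adds marshalling, dispatch and (for SOAP) XML processing on top, which is where the comparison gets interesting.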
However, and this is where I also agree with Jim, SOAP performance can be improved. The sorts of things that go on under the covers today in terms of XML parsing, for example, are pretty inefficient. Next time you want to see for yourself, just fire up something like OptimizeIt and watch what happens. I'm pretty confident that developers can and will improve on this. As an analogy, back when IONA released the first version of Orbix it was the market leader, but its performance was terrible compared to later revisions. (Opcodes were shipped as strings, for a start!) I'm not singling out IONA - this is a pattern that many other ORB providers followed. So, I agree with Jim: SOAP doesn't have to be this slow - it can be improved.
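To make that concrete, here's a small sketch of the kind of per-message work a SOAP stack has to do (my own illustration, not code from any toolkit - the envelope, class name and loop counts are all made up). A profiler like OptimizeIt will tell you where the time goes; this just shows that the time is there for every single message:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;

public class SoapParseCost {
    public static void main(String[] args) throws Exception {
        // A made-up, minimal SOAP envelope; real ones carry far more header machinery.
        String envelope =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
            + "<soap:Header/>"
            + "<soap:Body><getQuote><symbol>ABC</symbol></getQuote></soap:Body>"
            + "</soap:Envelope>";
        byte[] bytes = envelope.getBytes("UTF-8");

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);

        // Warm up the JIT, then time a batch of parses.
        for (int i = 0; i < 1000; i++) {
            factory.newDocumentBuilder().parse(new ByteArrayInputStream(bytes));
        }
        int n = 10000;
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            factory.newDocumentBuilder().parse(new ByteArrayInputStream(bytes));
        }
        long perMessage = (System.nanoTime() - start) / n;
        System.out.println("building a DOM for one tiny envelope: " + perMessage + "ns");
    }
}
```

Smarter stacks avoid building a full DOM, reuse parsers and skip work they don't need - exactly the kind of improvement Jim is talking about - but the text still has to be scanned and converted on every call.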
But this is where I stop agreeing and come back to the fact that it's beginning to sound like the "PC versus Mac, Unix versus Windows" debates of old. You're not comparing like with like.
This is definitely a case of using the right tool for the right job, combined with some unfortunate commercial realities. If you want interoperability with other vendors (eventually pretty much any other vendor on the planet), then you'd go the SOAP route: there is no logical argument to the contrary. CORBA didn't get mass adoption, DCE failed before it, and despite Microsoft's power, so did DCOM. Eric has some interesting things to say on the subject here, but the reason SOAP works well is because of XML, HTTP (IMO) and pretty much universal adoption. I can't see that changing. In the foreseeable future, I can't see the likes of Microsoft, IBM, Oracle, BEA etc. agreeing on a single protocol and infrastructure as they have with SOAP. To be honest, I think they were forced into the current situation because of the mass take-up of the original Web: they like vendor lock-in and had managed to maintain it for decades prior to Tim's arrival on the scene.
But you pay a heavy price for this kind of interoperability. There are inherent performance problems in SOAP that I just can't see going away. We may be able to chip at the surface and perhaps even make big dents, but fundamentally I'm confident that SOAP performance versus something like ICE (or CORBA) will always be a one-sided contest. However, a contest of interoperability will be just as one-sided, with SOAP winning. From the moment I got into Web Services, I've said that I can't see it (and SOAP) replacing distributed environments like CORBA everywhere. It frustrates me at times when I see clients trying to do just that, though, and then complaining that the results aren't fast enough! If I want to go off-road, I'll buy a Land Rover; but if I want speed, give me a Ferrari any day! Distributed systems such as CORBA have been heavily optimised over the years and use binary encodings as much as possible - with the resultant impact on interoperability and performance. But that is fine. That's what they're intended for. Certainly if I were interested in high performance, I wouldn't be looking at SOAP or Web Services, but at CORBA (or something similar).
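As a rough illustration of why that gap won't close completely, here's a sketch (my own, with a made-up payload and class name) comparing the bytes needed to ship the same numbers in a raw binary form - in the spirit of CORBA's CDR encoding - versus spelled out as XML text the way a SOAP body carries them:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;

public class EncodingSizes {
    public static void main(String[] args) throws Exception {
        // A made-up payload: 1,000 doubles, the kind of data a scientific service might return.
        double[] samples = new double[1000];
        for (int i = 0; i < samples.length; i++) {
            samples[i] = Math.random();
        }

        // Binary: 8 bytes per double, in the spirit of CORBA's CDR encoding.
        ByteArrayOutputStream binary = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(binary);
        for (int i = 0; i < samples.length; i++) {
            out.writeDouble(samples[i]);
        }

        // Text: each double spelled out as an XML element, the way a SOAP body would carry it.
        StringBuffer xml = new StringBuffer("<values>");
        for (int i = 0; i < samples.length; i++) {
            xml.append("<value>").append(samples[i]).append("</value>");
        }
        xml.append("</values>");

        System.out.println("binary bytes: " + binary.size());
        System.out.println("xml bytes:    " + xml.toString().getBytes("UTF-8").length);
        // The XML form comes out several times larger, before you even start parsing it.
    }
}
```

The ratio varies with the data, but the text form is always bigger and always has to be parsed at the other end - that's the price of being readable by anything.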
So in summary: of course there will be performance improvements for the SOAP infrastructure. There may even be a slow evolution to a pure binary, extremely efficient distributed invocation mechanism that looks similar to those systems that have gone before. But it's not strictly necessary and I don't see it happening as a priority. Use SOAP for interoperability. It lowers the integration barrier. But if you are really interested in performance and/or can impose a single solution on your corporate infrastructure, you may be better off looking elsewhere, to something like CORBA, or maybe even ICE.
Savas on the move
It's official now: Savas is on the move to Microsoft and Don Box's team. Savas and I have talked about his moving on from Paul's group for quite a while, so it's fair to say that it's not a surprise. I'm happy for him and sad at the same time - where do I find a coffee buddy now?
Savas and I have known each other for many years as friends and colleagues: while at HP, he was in my transactions team and was a star! I'm absolutely sure that Savas will make the most of this opportunity, and it's a good move for him and for Microsoft. Though I still think the other job was just as good, Savas ;-)
Anyway, good luck my friend and at the very least we'll be able to catch up at HPTS.
Wednesday, June 22, 2005
EPR rules of engagement
William, an ex-HP colleague, has some interesting things to say about EPRs and how people tackle them. I'm broadly in agreement with what he has to say; he even references our paper, which is nice. Unfortunately there are a couple of places where he's wrong:
(i) on the implication that WS-Coordination is the same as WS-Context. However, I think Greg has responded to that here (though since his blog seems down I can't check).
(ii) on the implication that this formal objection we've raised is somehow against EPRs and ReferenceParameters. It isn't (though that's not to say I don't believe the latter is inherently wrong, but that is a completely different issue). As I say here, this is about keeping EPRs symmetrical. To summarise, the current state of affairs is that EPRs are encapsulated entities until you need to use them, at which point their constituent elements (e.g., the endpoint URI, ReferenceParameters, etc.) appear as first-class elements in the SOAP header. As the objection describes, this can lead to a number of problems and we believe it's worth fixing now rather than trying to retro-fit something later.
Hopefully, given the pitfalls that William's original blog entry describes, he'll agree that the change we're after only makes things better.
Sunday, June 19, 2005
HPTS 2005
I submitted 3 papers to this year's High Performance Transaction Systems workshop and have been invited to attend. The biennial HPTS workshop is probably my favourite workshop/conference, so I'm pleased to be accepted for the fifth successive time. The papers aren't up on the web site yet, but in summary:
(i) a paper on WS-CAF that I co-authored with Eric and Greg.
(ii) a paper on ArjunaCore that I co-authored with Santosh.
(iii) a paper on the different conceptual models of Web Services transactions: does the one-size-fits-all approach really work and, if not, why not?
Once the papers are on some web site, I'll update the blog.
Tuesday, June 14, 2005
A musical interlude
At last year's WS-CAF f2f in New Orleans, Eric started to tell me about Professor Longhair and even gave me a few tracks to listen to. Wonderful music and it brings back good memories of New Orleans.
Thursday, June 02, 2005
End of the Grid TX road?
There's been an effort going on at the GGF for a while now into transaction management for Grid applications. The initial idea was to clearly define the requirements space for transactions in Grid (are they different from Web Services, for example?), then to take an objective look at efforts that have been going on elsewhere (e.g., Web Services) and finally to say whether or not those efforts are suitable. Ultimately, if the existing work wasn't deemed suitable for Grid transactions, the group would define some new transaction protocol(s); I think that last bit was always potentially work for whatever comes after us, rather than for this group.
Well, we're finally coming to the end of the road on this phase of the work. It looks like in the next few weeks we'll have a good report on everything up to, and including, recommendations for any future effort in this space by the GGF. It doesn't look like we'll be defining any new protocol(s) after all, but if the GGF decide that some are needed after reading the report, hopefully it will be useful input to the next phase.
Chapter in MIT book on SOC
About 2 years ago I wrote a paper for a special issue of the CACM on Web Services and transactions. A little later, I was asked by the guest editors to write an extended version for an MIT Press book on service-oriented computing. It's taken a little longer than the editors or I expected, but finally it looks like the book is coming to fruition. I got some good feedback on the chapter (which tried to cover all of the efforts in this space in an objective manner) a few weeks ago and now need to finalise the work by the end of the month.
One thing this did bring to mind is precisely how long I've been working in this area (and transactions in general). Not sure if that's a good thing or a bad thing.