Tuesday, April 23, 2013

SRDC 2013

TransForm School on Research Directions in Distributed Computing

CALL FOR ABSTRACTS

June 10-14, 2013

Heraklion, Crete Island, Greece

http://www.ics.forth.gr/carv/transform/srdc/

************************************************************************

TransForm School on Research Directions in Distributed Computing aims at
the dissemination of advanced scientific knowledge in the general area
of distributed computing with emphasis on multi-core computing,
synchronization protocols, and transactional memory. A major goal of the school
is to explore new directions and approaches on hot topics of current research
in these areas (and more generally in distributed computing) and to promote
international contacts among scientists from academia and the industry.
Research work from all viewpoints, including theory, practice, and experimentation
can be presented at the school.

The school will include a series of talks by renowned researchers.
A list of the invited speakers is provided below (this list is expected
to be expanded in the future):

- Carole Delporte, Universite Paris Diderot - Paris 7 (France)
- Shlomi Dolev, Ben Gurion University (Israel)
- Hugues Fauconnier, Universite Paris Diderot - Paris 7 (France)
- Pascal Felber, University of Neuchatel (Switzerland)
- Rachid Guerraoui, EPFL (Switzerland)
- Maurice Herlihy, Brown University (USA)
- Anne-Marie Kermarrec, INRIA-Rennes (France)
- Petr Kuznetsov, TU Berlin/Deutsche Telekom Laboratories (Germany)
- Mark Little, Red Hat (UK)
- Maged Michael, IBM Thomas J. Watson Research Center (USA)
- Alessia Milani, University of Bordeaux (France)
- Eliot Moss, University of Massachusetts (USA)
- Michel Raynal, University of Rennes I (France)
- Eric Ruppert, York University (Canada)
- Nir Shavit, MIT Computer Science and Artificial Intelligence Laboratory (USA)
- Assaf Schuster, Technion (Israel)
- Corentin Travers, University of Bordeaux (France)
- Pawel T. Wojciechowski, Poznań University of Technology (Poland)


Abstract Submissions
*************************
PhD students or young researchers interested in presenting their work
should submit a 1-page abstract motivating the main research challenge
they are addressing and stating the approach being taken. A selection
of proposals will be chosen for presentation. Each such presentation
will be 15 minutes long.

To submit a talk, please send an email to faturu AT csd DOT uoc DOT gr
by April 26, 2013. The subject of this e-mail should be of the following form:
": proposal for SRDC talk".
Every submission should be in English, in .ps or .pdf format, and
include the title, the names of the presenter and his/her collaborators
in the research work of interest, their affiliations, and a one page abstract
of the work. Students should also provide the name and contact information
of their advisors.

Abstracts will become available to participants electronically. Authors
will be given the option to upload their presentation on the school’s website.

Financial Support
*********************
TransForm will provide financial support to a number of researchers/students.
Those who intend to apply for financial support should send a 1 page application
(in addition to their talk proposal) which will provide a short description of their
travel expenses. Applications should be sent to faturu AT ics DOT forth DOT gr by April 26, 2013.


Important dates
******************
Submission deadline: April 26, 2013
Notification: April 30, 2013
School Date: June 10-14, 2013

Monday, April 22, 2013

Is Cloud the death of open source?

Over the last few years I've been hearing from various quarters that Cloud (specifically PaaS) doesn't need or work well with open source. At least what some of these people mean is that business models that have worked well for non-PaaS open source don't necessarily work for PaaS. I think the jury is still out on that one. However, if you look around at PaaS implementations out there, or even further up and down the stack to include IaaS and SaaS, it's clear that open source is playing a major role. Whether it's OpenShift, OpenStack, MySQL, Linux or a plethora of other components, it's hard to find environments that aren't built on open source in one way or another. (Excluding closed source companies, of course!)

Now why do I mention this? Because I'm just back from JUDCon Brazil and this topic of conversation came up with some of the attendees. In fact they were suggesting that several of the most significant waves in software over the past few years and into the next few years are fuelled by the innovation within disparate open source communities. When you look at cloud, mobile, ubiquitous computing etc. it's hard to disagree!

Sunday, March 31, 2013

A Raspberry Pi and vert.x related cross-posting

I've been making progress on another Pi-related project. Since it also involves transactions, I posted it on the JBossTS blog, but wanted to cross-post here for those who may not track that blog separately.

Sunday, March 17, 2013

Travelling is a PITA at times!

I spent pretty much all of the week before last in New York and Boston visiting many of our financial services customers. It's a great opportunity to hear what they like, what they don't like and work together to make things better. I'd do this much more often though if it didn't involve flying! It seems that every time I fly I end up getting ill; usually a cold or (man) flu. Unfortunately this time was no different and when I got home at the weekend I could feel something coming. Sure enough, it was a heavy cold and it laid me up for days (I'm still not recovered completely yet).

Then while I'm recovering I remember that I missed QCon London. I vaguely remember many months ago while planning that my trip conflicted with QCon, but it had slipped from my mind until last week. It's a shame, because I love the QCon events. However, what made this one worse was that I appear to have completely missed the fact that Barbara Liskov was presenting! It's been the best part of 20 years since I last saw Barbara, so it would have been good to hear her and try to catch up. Back in the 1980's I visited her group for a while due to the similarities between what they were doing around Argus, replication (both strong and gossip based) and transactions, and of course what we were doing in Arjuna. She was a great host and I remember that visit very fondly. Oh well, maybe in another 20 years we'll get a chance to meet up again!

Friday, February 22, 2013

Updating vert.x examples on the Pi

As I mentioned earlier, I've been repeating earlier experiments around running vert.x on a 256 Meg model B Raspberry Pi. I was working through all of the Java examples and didn't have time to go through them all again. But here's an update on the rest.

For a start, I should say that Fanout Server works, but again you may need to install a telnet client on your Pi, or connect from another machine. However, the HTTPS example failed (on Safari, Chrome and Firefox) and I remember this happened in December when I first did this work. I reported the problem to Tim and hopefully we'll get to the bottom of it eventually:



The Proxy example works too, but you need to remember that the Pi isn't an i7 multicore laptop, so just be patient:

PubSub is another example that often requires a little patience before trying to telnet, but the results are worth it:

The Upload example is relatively simple:

Route Match works too, but remember that the text in the example is wrong and the actual directory is route_match and not routematch!
Make sure you have your CLASSPATH set correctly before running the Resource Load example:
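
As a rough sketch (the directory and file names here are hypothetical - check the examples distribution for the real ones), the idea is:

  # hypothetical paths - adjust to wherever the example actually lives
  cd vertx-examples/java/resourceload
  export CLASSPATH=.
  vertx run ResourceLoadServer.java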

And I think that's it. As I mentioned before, if you have any problems reproducing any of this then let me know and I'm happy to try to help.

Thursday, February 21, 2013

More adventures in Pi land with vert.x

It seems that some people are seeing problems running vert.x on the Pi, so I decided to double check what I did over Christmas. I'm using the exact same configuration as the last time, so it's still the soft float implementation of Wheezy. If people still have difficulties duplicating what I'm seeing here then I'll try again with the hard float version.

So I worked my way through some of the vert.x Java examples only (I'll try the ones I missed at a later date). The thing to realise straight away is that if any of the examples tell you to point a browser, telnet or some other client at a running service then wait a while before doing so: the Pi isn't the speediest device on the block and sometimes it just takes that little bit longer to get going. Try running netstat in a different shell to see what ports have been created and bound as well:
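
For example, something like the following (netstat is part of the standard Wheezy install) will show which TCP ports the Java process has bound:

  # list listening TCP sockets and the owning process
  netstat -tlnp | grep java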


Starting with EchoServer+Client (you may need to install a telnet client, or just connect from a different machine):
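
If you need a client on the Pi itself, installing one and connecting looks roughly like this (the address and port here are illustrative - use your Pi's IP and whatever port EchoServer reports):

  sudo apt-get install telnet
  # substitute your Pi's address and the server's port
  telnet 192.168.1.100 1234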


Through HTTP ...


And SendFile ...


SSL of course ...


... and a copy of top output to show the two instances running ...


It wouldn't be a good test of the capabilities of the Pi and vert.x without some WebSockets running, so here's what you should see upon success:

And finally Eventbus Bridge:


There were some failures, HTTPS for instance, but I think this is a known problem and I will investigate further before posting anything. However, hopefully this is enough to be getting on with and shows that it should be possible to run most (probably all) of the vert.x Java examples on a Raspberry Pi.



Tuesday, February 19, 2013

Move to e-books


Anyone who has known me long enough will know that I love books. Whether it's fiction or fact based, I love the physical medium that books provide. New books have a great smell and crispness about them. Old books often encapsulate their history within their very pages, whether in the form of turned pages, broken spines (something I hate!), stains or something else entirely. And if you return to an old book that you haven't read for years, these imperfections can often stir memories of times past, offering additional value beyond just reading the text (my copy of To Kill A Mockingbird is just such a book, and each time I read it it's like seeing an old friend again).

Over the years I've collected hundreds of books, ranging from fantasy and science fiction to classics and of course work-related titles. And rarely have I discarded a book. So as I grew up it became harder and harder to get them all out of the boxes in which they often ended up. Then I met my wife, who has just as much of a passion for books and is a collector too. So something had to give.

Several years ago I converted an old HP Jornada into an e-book reader for her and she took to it. The convenience of the form factor, the ability to store hundreds of books on flash memory and the fact that she could get books instantly sold it to her. I, however, remained unconvinced. The price difference between the physical copy and the electronic copy annoyed me and still does. And I still love the tactile aspect of a real book. Then my wife upgraded to a Sony reader with e-ink and she really fell in love with the format. Though she still buys physical copies of select books, the vast majority of the books she gets today are e-books.

Throughout this I have remained resolutely against moving. As I said, I love the old style format and don't think e-books are the same, no matter how good the technology gets. However, it is the reality of family life coupled with the masses of books we possess that is pushing me towards the electronic versions, at least in a limited way. I'm going to give it a go for selected books: those for which I won't necessarily build emotional ties. But I reserve the right to be disappointed in losing something in the transition and I may go back eventually. Finally, I find it interesting that given my background in computing and adoption of new technologies, I can't get over the hurdle of migrating to e-books. Maybe this is similar to the vinyl versus CD debate of two decades ago?

Monday, February 11, 2013

Nice posting about Colossus and Brian

Brian's been a permanent part of Newcastle University for as long as I can recall, so it's nice to see this write-up on some of the things he's done over the years.

Thursday, February 07, 2013

HPTS 2013 CfP


15th International Workshop on High Performance Transaction Systems (HPTS)
September 22-25, 2013
Asilomar Conference Grounds, Pacific Grove, CA
http://hpts.ws

Every two years, HPTS brings together a lively and opinionated group of participants to discuss and debate the pressing topics that affect today's systems and their design and implementation, especially where scalability is concerned. The workshop includes position paper presentations, panels, moderated discussions, and significant time for casual interaction. And of course beer.

Since its inception in 1985, HPTS has always been about large scale. Over the years the focus has shifted from scalable transaction processing to very large databases to cloud computing. Today, scalability is about big data. What interesting but out-of-the-spotlight big-data applications are out there? How are datacenter software and hardware abstractions evolving to support big data apps? How has big data changed the role of data stewardship - not just data security, but data provenance and dealing with noisy data? How are big data apps affected by limitations in energy consumption? What advances have occurred in identifying patterns and even approximate schemas at petabyte scale? How has the provisioning of networking, storage and computing in datacenters had to shift to support these apps?

We ask potential participants to submit a brief technical summary or position, presenting a viewpoint on a controversial topic, a summary of lessons learned, experience with a large or unusual system, an innovative mechanism, an enormous problem looming on the horizon, or anything else that convinces the program committee that the participant has something interesting to say. The submission process is purposely lightweight, but we require each submission to have only a single author.

The workshop is by invitation only and is limited to under 100 participants. The submissions drive both the invitation process and the workshop agenda. Participants may be asked to be part of a presentation or discussion session at the workshop. Students are particularly encouraged to submit.

What to submit:
- A 1 page position statement or extended abstract
- Optional: the written submission can include a link to one or both of the following as an expanded part of the submission:
  - Maximum of 3 PowerPoint-type slides
  - Maximum 2 minute video presentation - can be of you speaking with or without slides, a video demo or other video illustration of your proposed presentation, etc.
- Even if you choose NOT to submit these, the PC may decide to ask you for them later during consideration of submissions.
The length limits will be strictly observed. We won't consider too-long submissions.

How to submit:  Go to http://bit.ly/hpts2013submit

When to submit:  Now would be good. Official deadlines are:
Submission of Papers:  March 11, 2013
Notification of Acceptance:  May 24, 2013
HPTS Workshop:  September 22-25, 2013

Organizing committee:  Pat Helland, Salesforce; Pat Selinger, IBM (General Chair); Shel Finkelstein, SAP; Mark Little, Red Hat

Program committee
Anastasia Ailamaki, EPFL
David Cheriton, Arista Networks/Stanford
Adrian Cockcroft, Netflix
Bill Coughran, Sequoia Capital
Armando Fox, UC Berkeley (Chair)
Sergey Melnik, Google
Adam Messinger, Twitter
Margo Seltzer, Harvard
Wang-Chiew Tan, UC Santa Cruz

Poster session chair
Michael Armbrust, Google

Saturday, January 05, 2013

An update on Pi work

After a few interactions with people on the twitter-verse and writing the blog about building MongoDB for the Pi, I decided that it was probably a good thing to bundle up my distribution of MongoDB for the Raspberry Pi and make it publicly available. So if anyone wants to get it, it's available on github at https://github.com/nmcl/mongo4pi. Enjoy and let me know if you have any issues with it.
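
Getting it is just a matter of cloning the repository and following its README (a sketch, assuming git is already installed):

  git clone https://github.com/nmcl/mongo4pi.git
  cd mongo4pi
  # then follow the README for the build and install steps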

Monday, December 31, 2012

Adventures in Pi Land Part Two

In the first part of this blog we looked at the initial setup of the Raspberry Pi. In this one we'll look first at building and running Fuse Fabric, followed by vert.x. So first Fabric, and before I go on it's worth mentioning that this is still not complete; I need to check with the team in the new year to get more details on the problems, which could very well be due to my setup on the Pi.

Initially we had 256 Meg of swap and with maven2 installed the Fabric build docs tell us to use the m2 profile option (and the right MAVEN_OPTS). Unfortunately the initial build attempt failed with maven2, so I installed maven 3.0.4 which gets us a little further:


[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12:test (default-test) on project archetype-builder: Error while executing forked tests.; nested exception is java.io.IOException: Cannot run program "/bin/sh" (in directory "/home/pi/fusesource/fuse/tooling/archetype-builder"): java.io.IOException: error=12, Cannot allocate memory -> [Help 1]

So we increase swap to 1024 Meg by again editing the /etc/dphys-swapfile as mentioned before. This helped, but even with these options, the Pi crashed after about 6 hours of building. (Yes, the Pi crashed!) And as you can see from the screen shot, prior to this Java was taking up a lot of the processor time.


And if not Java, then writing to swap:


After playing with more configuration parameters (with failures 6+ hours later) and ordering a faster SD card (which has not turned up at the time of writing), I decided to try the latest official binary of Fabric from the download site (fuse-fabric-7.0.1.fuse-084) - no point struggling to get it to build if it won't run anyway. Working through the Getting Started tutorial did not produce the expected output, which needs investigation once I get back to work and can talk with the team. For instance, when trying to use camel ...


And over 7 minutes later the same command gave:


And running top showed the processes were fairly idle:


I spent a few more hours trying other options, but nothing really seemed to make a difference or indicate whether the issues were with the Pi or elsewhere. But I still consider this a success, since I learnt a few more things about the capabilities and limitations of the Pi that I hadn't known before, as well as about Fabric (which was the original project I had set myself). And I will return to this and see it through to completion soon.

So onwards to vert.x. Initially I decided not to build the latest source, but to just see if the official binary would run. Now vert.x requires JDK 7, so we need to download and install that from Oracle. Next, downloading and installing vert.x-1.3.0.final and following the installation guide brings us to:

Success! It took over a minute for the server.js example to initialise (since I'm connecting remotely to the Pi, I changed localhost to the actual IP address). But HelloWorld eventually worked.
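
For the record, starting the example and poking it from another machine amounted to something like this (the address and port are illustrative):

  vertx run server.js
  # then, from another machine on the network:
  curl http://192.168.1.100:8080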

We're on a roll here! Next we moved on to the JavaScript Web Application Tutorial - I want to test as much of vert.x as possible and implicitly the Raspberry Pi's boundaries. This tutorial needs MongoDB. Unfortunately you can't just install the Linux distribution version as it's not suitable for the Pi. I looked around to see if there was a build I could get elsewhere and found one, but unfortunately that needs the hardware floating point version of Wheezy, which we can't use if we are using Java 6 or 7.

Someone said that you can build an older version of MongoDB from source, which I tried. To do this I made sure we were still working with 1024 Meg of swap. But unfortunately, after about 10 hours the build failed:

Fortunately a little digging around found a couple of other options and I went with the first, i.e., http://mongopi.wordpress.com/2012/11/25/installation/. Still with 1024 Meg of swap, after 12 hours of building and similarly for installation, I ended up with a running version of MongoDB on the Raspberry Pi!
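
The steps in that walkthrough boil down to roughly the following (a sketch from memory - the repository name and install prefix may differ, so follow the post itself; MongoDB builds with scons):

  sudo apt-get install scons build-essential libboost-all-dev
  git clone https://github.com/RickP/mongopi.git
  cd mongopi
  scons                            # this is the many-hours part on a Pi
  sudo scons --prefix=/opt/mongo install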


I was then able to progress through the example, but still had a few failures due to incorrect module versions mentioned in the walkthrough. Thanks to Tim for giving the right numbers and he's going to update the pages asap. There are still a few niggles with the example failing during authentication, but I can live with them for now and will get to the bottom of them eventually.

But what about building vert.x from scratch on the Pi? Well that was a lot easier than I had thought, given everything else that's happened so far. Building vert.x head from source with Jython 2.5.2, we initially get an error:

But if you use the nightly build of gradle:


Which eventually results in ...


Success again!

So what does all of this mean? Well apart from the obvious areas where I still need to see success, I think it's been a great exercise in learning more about the Pi. I've also learnt a lot more about the various projects I wanted to play with this festive season. I'm happy, even if it did take quite a few more days to do than I expected originally!

Adventures in Pi Land Part One

Over this Christmas vacation I set myself a number of pet projects that I'd normally not have time to do during the rest of the year. Over the last 6 months or so I've been playing with the Raspberry Pi, but not really pushing it a lot - more a case of playing around with it. So I decided that I'd try and make all of my projects over Christmas relate to the Pi in one way or another. This blog and the follow-up will relate what happened.

OK, so before we really get going it's worth looking at the Pi setup. In keeping with its background, setting up the Pi is pretty simple and you can find details in a number of places including the official Pi site. But I'll include my configuration here for completeness. First, I've been using one of the original Model B instances, i.e., one with 256 Meg of memory and not the newly updated version with 512 Meg. As a result, if you've got a newer version then you may be able to tweak a few settings, such as the swap space.

Because I'm playing with JDK 6 and 7, I used the soft-float variant of Wheezy. After burning that to an SD card, remember to use raspi-config to get back the entire disk space, or you'll find an 8Gig SD card only appears to have a few hundred Meg free! And don't forget to use the right kind of SD card - faster is better. I run my Pi headless (no free monitor or keyboard these days), so initially I had it connected to my router via an ethernet cable and then immediately configured wifi. How you do this will depend upon the wifi adapter you use, but I'm happy with the Edimax EW-7811Un and you can get information about how to update the kernel with the right driver from a number of places.
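
In practice, reclaiming the card space looks something like this (expand_rootfs was the relevant raspi-config option at the time):

  sudo raspi-config    # choose the expand_rootfs option, then reboot
  df -h                # confirm the card's full capacity is now visible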

Once wifi was up and going, I changed the swap size for the Pi. In the past this wasn't an issue, but then I hadn't been trying to build Arjuna, Fuse, vert.x and MongoDB! You can modify swap by editing /etc/dphys-swapfile and then running /etc/init.d/dphys-swapfile stop followed by /etc/init.d/dphys-swapfile start. Initially I started off with 256 Meg of swap but, as you'll see later, this wasn't always sufficient! Finally, we finish the setup by adding openjdk 6 (sudo apt-get install openjdk-6-jre openjdk-6-jdk) followed by git and maven (sudo apt-get install maven2 git).
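
Concretely, the swap change is just this (CONF_SWAPSIZE is the setting inside /etc/dphys-swapfile):

  sudo nano /etc/dphys-swapfile            # set CONF_SWAPSIZE=256 (later 1024)
  sudo /etc/init.d/dphys-swapfile stop
  sudo /etc/init.d/dphys-swapfile start
  free -m                                  # confirm the new swap size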

So this brings us to a base from which we can proceed with the real projects. The first one, which was building Arjuna/JBossTS/Narayana, was pretty straightforward compared to the others and has been documented elsewhere. Which means in the next instalment we'll look at Fuse Fabric, vert.x and, because of that project, MongoDB.

Wednesday, December 26, 2012

Thunderbirds are go!

Over the year a number of people who were influential in my life have died. So it's with great sadness that I just heard that Gerry Anderson passed away today! Growing up in the late 60's and early 70's, programs such as Space 1999, Thunderbirds, Joe 90, Stingray and Captain Scarlet were the height of TV watching for kids. Each episode was an event, probably on the same scale as blockbuster films are today for kids. Having two children growing up over the past 2 decades, I can say that I've had to watch quite a lot of children's TV in the 90's and beyond, and yet none of it seems to have been as influential on my kids or their friends as Gerry Anderson's various efforts were on us.

Some kids watching them today may laugh at the special effects or the "wooden" acting, but I think that's due to the expectations that films like Lord of the Rings or Avatar have set. But relatively speaking, Gerry was the king of his era and those programs will live on in the memories of many people, myself included. It's truly a sad day. And I can't think of a more fitting way to say thank you and honour that memory than to watch my boxset of Stingray!

Monday, December 24, 2012

A busy busy year

I'm on holiday and for the first time in about 6 months I have some time to reflect on the past year. And one word sums it up: busy. One way or another work has been tying me up, flying me around, or just consuming my energy. Fortunately I love my work, or I'm sure I'd have suffered a lot more stress than I have. Most of the time work presents problems that I have to solve, and throughout my life I've loved problem solving! And the people I work with are great and friendly too, which helps a lot.

Now what got me reflecting on the past year was simply when I took a look at what I've blogged about here over the last 12 months. It's not a lot compared to previous years, and yet I had thought it was much more. However, when you take into account the blogs I write on JBoss.org and other articles I write, such as for InfoQ, it starts to make sense. And of course there's twitter: despite my initial reservations about using it, I think this year has been the one where I've really put a lot more effort into being there and tracking what others are saying and doing. I believe there's a correlation between the amount I've tweeted and the reduction in the blogging I've done, at least here.

So what about 2013? Well I've deliberately tried not to think that far ahead, but I already know it promises to be as busy as 2012. Bring it on!

Sunday, December 09, 2012

Farewell Kohei

I was shocked earlier this week when I found out that Kohei Honda passed away suddenly. I've known Kohei personally for a few years but longer in terms of his influence on the field of distributed systems. The work that we've been doing around Savara, WS-Choreography, Pi Calculus and beyond, has been better for his involvement. We were talking about new bodies of collaborative work only in the last few weeks, which makes his death even more shocking and personal. He will be missed and my best wishes go out to his family and his other friends and colleagues.

Farewell Sir Patrick Moore

I haven't blogged for a while because I haven't really had time or the inclination to do so. However, when I saw that Sir Patrick Moore had just died I couldn't help but put fingers to keyboard! As a child growing up in the 70's, The Sky At Night was a wonderful program to follow. Not only was this the Space Age, but science fiction was rich and influential as a result, as well as the fact that in the UK we had just 2 TV channels to choose from (both of which started in the morning and ended in the late evenings). Way before Star Wars, Close Encounters etc. this small program hidden away at night on the BBC was my view screen into the much larger universe beyond my four walled world. And Patrick Moore was the epitome of the scientists who were pushing back the curtain to reveal the secrets beyond.

To say that The Sky At Night and its presenter influenced me would be a bit like saying that the air around us influences us: both were pivotal when I was in my most formative years and I know many people who were similarly impacted by them. So it is with great sadness that I heard of his death; my condolences go out to his family and friends. And maybe tonight I'll get my telescope out and think of him and that time over 35 years ago when he captured my attention and helped shape me in some small way. Thank you Sir Patrick, you will be sorely missed!

Sunday, October 28, 2012

Cloud and Shannon's Limit

I've been on the road (or air) so much over the past few months that some things I had thought I'd blogged about turn out to be either dreams or only to have hit twitter. One of them is Shannon's Limit and its impact on the Cloud, which I've been discussing in presentations for about 18 months or so. There's a lot of information out there on Shannon's Limit, but it's something I came across in the mid 1980's as part of my physics undergraduate degree. Unfortunately the book I learned from is no longer published, and apart from a couple of texts that are accessible via Google I can't really vouch for any of the others (they may be good, but I simply don't have the context to say that with certainty). However, if you're looking for a very simple, yet accurate, discussion of what Shannon's Limit says, it can be found here.
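
For reference, the result in question (often given as the Shannon-Hartley theorem) bounds the capacity C of a communications channel in terms of its bandwidth B and signal-to-noise ratio S/N:

  C = B * log2(1 + S/N)

However clever the endpoints become, the links between them have a hard ceiling on how much information they can carry per second.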

So what has this got to do with the Cloud? Put simply, Shannon's Limit shows that the Cloud (public or private) only really works well today because not everyone is using it. Bandwidth and capacity are limited by the properties of the media we use to communicate between clients and services, no matter where those services reside. For the cloud, the limitation is the physical interconnects over which we try to route our interactions and data. Unfortunately, no matter how quickly your cloud provider can improve their back end equipment, the network to and from those cloud servers will rarely change or improve, and if it does it will happen at comparatively glacial speeds.

What this means is that for the cloud to continue to work and grow with the increasing number of people who want to use it, we need to have more intelligence in the intervening connections between (and including) the client and service (or peers). This includes not just gateways and routers, but probably more importantly mobile devices. Many people are now using mobile hardware (phones, pads etc.) to connect to cloud services so adding intelligence there makes a lot of sense.

Mobile also has another role to play in the evolution of the cloud. As I've said before, and presented elsewhere, ubiquitous computing is a reality today. I remember back in 2000 when we (HP) and IBM were talking about it, but back then we were too early. Today there are billions of processors, hundreds of millions of pads, 6 billion phones etc. Most of these devices are networked. Most of them are more powerful than machines we used a decade ago for developing software or running critical services. And many of them are idle most of the time! It is this group of processors that is the true cloud and needs to be encompassed within anything we do in the future around "cloud".

Friday, October 26, 2012

NoSQL and transactions


I've been thinking about ACID and non-ACID transactions for a number of years. I've spent almost as long working in the industry and standards trying to evolve them to cater for environments where strict ACID transactions are too much. Throughout all of this I've been convinced that transactions are the right abstraction for many of the fault tolerance, reliability and consistency requirements. Over the years transactions have received bad press in some quarters, sometimes from people who don't understand them, overuse them, or don't really want to have to implement them. At times various waves of technology have either helped or hindered the adoption of transactions outside of the traditional database; for instance some NoSQL efforts eschew transactions entirely (ACID and extended), citing CAP when it's not always right to do so.

I think a good transactions implementation should be at the core of all middleware platforms and databases, because if it's well thought out then it won't add overhead when it's not needed and yet provides obvious benefits when it is. It should be able to offer a wide range of transaction models (well at least more than one) and a model that makes it easier to reason about the correctness and consistency of applications and services developed with it.

At the moment most NoSQL or BigData solutions either ignore transactions or support ACID or limited ACID (only in the scope of a single instance). But it's nice to see a change occurring, such as seen with Google's Spanner work. And as they say in the paper: "We believe it is better to have application programmers deal with performance problems due to overuse of transactions as bottlenecks arise, rather than always coding around the lack of transactions."

And whilst I agree with my long-time friend, colleague and co-author on RDBMS versus the efficacy of new approaches, I don't think transactions are to be confined to the history books or traditional back-end data stores. There's more research and development that needs to happen, but transactions (ACID and extended) should form a core component within this new infrastructure. Preconceived notions based on overuse or misunderstanding of transactions shouldn't dissuade their use in the future if it really makes sense - which I obviously think it does.

Wednesday, September 19, 2012

Travel woes

I've been doing a lot of international travel in the last few weeks, with more to come. It can be annoying at times on flights, what with the people who knock you with their bags if you're sat on the aisle; passengers who put so much stuff under their seats that it encroaches on your leg space; those who recline their seats when you're trying to eat; then there are the passengers who bring suitcases on board big enough to live in (people, it's not carry-on if you have to wheel it in or need help picking it up!); or those kids who cry all flight and kick the back of your seat.

But the passengers who annoy me the most are those idiots who throw bags into the overhead lockers and rely on the door to keep things in! Then when someone else opens it, guess who the bags land on?! And when the guilty party simply states "Oh, I didn't realise", it really doesn't help! Look, if you didn't realise then you really should go back to school and learn about gravity! The next person who does that is likely to get more than harsh words from me.

Tuesday, September 04, 2012

Coming or going?

I've been dreading September because it represents the busiest travel schedule that I can recall having for many years. After this week it seems that I am away from the country every week for the next 7 weeks, stopping back in the UK to make sure my family remember what I look like! I'm at JavaOne, StrangeLoop, a JBUG, a Cloud-TM meeting, new hire orientation for our Fuse acquisition, a Customer Advisory Board meeting and a Quarterly Business Review. On average I think I'll have 2 days a week at home and will see the inside of planes more than the inside of my home. Ouch!