Saturday, January 05, 2013
An update on Pi work
Monday, December 31, 2012
Adventures in Pi Land Part Two
Initially we had 256 Meg of swap and maven2 installed; the Fabric build docs tell us to use the m2 profile option (and the right MAVEN_OPTS). Unfortunately the initial build attempt failed under maven2, so I installed maven 3.0.4, which gets us a little further:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12:test (default-test) on project archetype-builder: Error while executing forked tests.; nested exception is java.io.IOException: Cannot run program "/bin/sh" (in directory "/home/pi/fusesource/fuse/tooling/archetype-builder"): java.io.IOException: error=12, Cannot allocate memory -> [Help 1]
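For anyone trying to reproduce this, the snippet below is only a sketch of the sort of environment I mean rather than anything lifted from the Fabric docs: the heap and permgen numbers are guesses you'll need to tune to whatever memory your Pi actually has, and skipping the tests is simply my workaround for the forked-JVM failure above.

  # Keep the build JVM small so a 256 Meg Pi doesn't run out of memory (values illustrative)
  export MAVEN_OPTS="-Xmx256m -XX:MaxPermSize=128m"
  # The m2 profile mentioned in the build docs, with the surefire test forks skipped
  mvn clean install -Pm2 -DskipTests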
Adventures in Pi Land Part One
OK, so before we really get going it's worth looking at the Pi setup. In keeping with its background, setting up the Pi is pretty simple and you can find details in a number of places including the official Pi site. But I'll include my configuration here for completeness. First, I've been using one of the original Model B instances, i.e., one with 256 Meg of memory and not the newly updated version with 512 Meg. As a result, if you've got a newer version then you may be able to tweak a few settings, such as the swap space.
Because I'm playing with JDK 6 and 7, I used the soft-float variant of Wheezy. After burning that to an SD card, remember to use raspi-config to expand the filesystem to the whole card, or you'll find an 8 Gig SD card only appears to have a few hundred Meg free! And don't forget to use the right kind of SD card - faster is better. I run my Pi headless (no free monitor or keyboard these days), so initially I had it connected to my router via an ethernet cable and then immediately configured wifi. How you do this will depend upon the wifi adapter you use, but I'm happy with the Edimax EW-7811Un, and you can find information about updating the kernel with the right driver in a number of places.
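For what it's worth, once the driver and wpasupplicant are in place, wifi on Wheezy boils down to an entry along these lines in /etc/network/interfaces; the SSID and passphrase below are obviously placeholders.

  # /etc/network/interfaces - bring wlan0 up via DHCP using WPA
  allow-hotplug wlan0
  iface wlan0 inet dhcp
      wpa-ssid "YourNetworkName"
      wpa-psk  "YourPassphrase"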
Once wifi was up and going, I changed the swap size for the Pi. In the past this wasn't an issue, but then I hadn't been trying to build Arjuna, Fuse, vert.x and MongoDB! You can modify swap by editing /etc/dphys-swapfile and then running /etc/init.d/dphys-swapfile stop followed by /etc/init.d/dphys-swapfile start. Initially I started off with 256 Meg of swap but, as you'll see later, this wasn't always sufficient! Finally, add openjdk 6 (sudo apt-get install openjdk-6-jre openjdk-6-jdk) followed by git and maven (sudo apt-get install maven2 git).
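In case it helps, here's roughly what that looks like end to end; the sed one-liner is just one way of making the edit, and 256 is only the starting value I used.

  # Increase the swap file size (CONF_SWAPSIZE is in megabytes) and restart the service
  sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=256/' /etc/dphys-swapfile
  sudo /etc/init.d/dphys-swapfile stop
  sudo /etc/init.d/dphys-swapfile start

  # Java, git and maven
  sudo apt-get install openjdk-6-jre openjdk-6-jdk
  sudo apt-get install maven2 git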
So this brings us to a base from which we can proceed with the real projects. The first one, building Arjuna/JBossTS/Narayana, was pretty straightforward compared to the others and has been documented elsewhere. Which means in the next instalment we'll look at Fuse Fabric, vert.x and, because of that project, MongoDB.
Wednesday, December 26, 2012
Thunderbirds are go!
Some kids watching them today may laugh at the special effects or the "wooden" acting, but I think that's due to the expectations that films like Lord of the Rings or Avatar have set. But relatively speaking, Gerry was the king of his era and those programs will live on in the memories of many people, myself included. It's truly a sad day. And I can't think of a more fitting way to say thank you and honour that memory than to watch my boxset of Stingray!
Monday, December 24, 2012
A busy busy year
Now what got me reflecting on the past year was simply when I took a look at what I've blogged about here over the last 12 months. It's not a lot compared to previous years, and yet I had thought it was much more. However, when you take into account the blogs I write on JBoss.org and other articles I write, such as for InfoQ, it starts to make sense. And of course there's twitter: despite my initial reservations about using it, I think this year has been the one where I've really put a lot more effort into being there and tracking what others are saying and doing. I believe there's a correlation between the amount I've tweeted and the reduction in the blogging I've done, at least here.
So what about 2013? Well I've deliberately tried not to think that far ahead, but I already know it promises to be as busy as 2012. Bring it on!
Sunday, December 09, 2012
Farewell Kohei
Farewell Sir Patrick Moore
To say that The Sky At Night and its presenter influenced me would be a bit like saying that the air around us influences us: both were pivotal when I was in my most formative years, and I know many people who were similarly impacted by them. So it is with great sadness that I heard of his death; my condolences go out to his family and friends. And maybe tonight I'll get my telescope out and think of him and that time over 35 years ago when he captured my attention and helped shape me in some small way. Thank you Sir Patrick, you will be sorely missed!
Sunday, October 28, 2012
Cloud and Shannon's Limit
So what has this got to do with the Cloud? Put simply, Shannon's Limit shows that the cloud (public or private) only really works well today because not everyone is using it. Bandwidth and capacity are limited by the properties of the media we use to communicate between clients and services, no matter where those services reside. For the cloud, the limitation is the physical interconnects over which we try to route our interactions and data. Unfortunately, no matter how quickly your cloud provider can improve their back-end equipment, the network to and from those cloud servers will rarely change or improve, and if it does it will happen at comparatively glacial speeds.
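For anyone who hasn't come across it before, the limit in question is Shannon's channel capacity theorem, which (roughly stated) bounds the maximum error-free data rate C of a channel by its bandwidth B and signal-to-noise ratio S/N:

  C = B log2(1 + S/N)

No amount of cleverness at either end of the connection changes B or S/N for the physical links in between, which is why the pipes to and from the cloud matter so much.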
What this means is that for the cloud to continue to work and grow with the increasing number of people who want to use it, we need to have more intelligence in the intervening connections between (and including) the client and service (or peers). This includes not just gateways and routers, but probably more importantly mobile devices. Many people are now using mobile hardware (phones, pads etc.) to connect to cloud services so adding intelligence there makes a lot of sense.
Mobile also has another role to play in the evolution of the cloud. As I've said before, and presented elsewhere, ubiquitous computing is a reality today. I remember back in 2000 when we (HP) and IBM were talking about it, but back then we were too early. Today there are billions of processors, hundreds of millions of pads, 6 billion phones etc. Most of these devices are networked. Most of them are more powerful than machines we used a decade ago for developing software or running critical services. And many of them are idle most of the time! It is this group of processors that is the true cloud and needs to be encompassed within anything we do in the future around "cloud".
Friday, October 26, 2012
NoSQL and transactions
I've been thinking about ACID and non-ACID transactions for a number of years. I've spent almost as long working in the industry and standards trying to evolve them to cater for environments where strict ACID transactions are too much. Throughout all of this I've been convinced that transactions are the right abstraction for many of the fault tolerance, reliability and consistency requirements. Over the years transactions have received bad press in some quarters, sometimes from people who don't understand them, overuse them, or don't really want to have to implement them. At times various waves of technology have either helped or hindered the adoption of transactions outside of the traditional database; for instance, some NoSQL efforts eschew transactions entirely (ACID and extended), citing CAP when it's not always right to do so.
I think a good transactions implementation should be at the core of all middleware platforms and databases, because if it's well thought out then it won't add overhead when it's not needed and yet will provide obvious benefits when it is. It should be able to offer a wide range of transaction models (well, at least more than one) and a model that makes it easier to reason about the correctness and consistency of applications and services developed with it.
At the moment most NoSQL or BigData solutions either ignore transactions or support ACID or limited ACID (only in the scope of a single instance). But it's nice to see a change occurring, such as seen with Google's Spanner work. And as they say in the paper: "We believe it is better to have application programmers deal with performance problems due to overuse of transactions as bottlenecks arise, rather than always coding around the lack of transactions."
And whilst I agree with my long-time friend, colleague and co-author on RDBMS versus the efficacy of new approaches, I don't think transactions should be confined to the history books or traditional back-end data stores. There's more research and development that needs to happen, but transactions (ACID and extended) should form a core component within this new infrastructure. Preconceived notions based on overuse or misunderstanding of transactions shouldn't dissuade their use in the future if it really makes sense - which I obviously think it does.
Wednesday, September 19, 2012
Travel woes
But the passengers who annoy me the most are those idiots who throw bags into the overhead lockers and rely on the door to keep things in! Then when someone else opens it, guess who the bags land on?! And when the guilty party simply states "Oh, I didn't realise", it really doesn't help! Look, if you didn't realise then you really should go back to school and learn about gravity! The next person who does that is likely to get more than harsh words from me.
Tuesday, September 04, 2012
Coming or going?
Sunday, August 26, 2012
Farewell Neil Armstrong
But I wanted to say a bit more. I was only 3 when we first landed on the moon. I'm told you shouldn't really be able to remember things that far back, or when you're that young, but I do: we had a black-and-white TV and I recall sitting on the floor of the living room watching the landing. Whether it would have happened with or without that moment, from then on I always had science, astronomy and space flight in my mind. Whether it was reading about black holes, rockets, time dilation or science fiction, or going to university and studying physics and astrophysics, they all pushed me in the same direction.
Landing on the moon was a pivotal event for the world and also for me personally. And Neil Armstrong was the focus of that event. I never met him, but for the past 40+ years I've felt his influence on many of the things I've done in my life. Thanks Neil!
Sunday, August 12, 2012
JavaOne 2012
carousel at the top of the JavaOne 2012 home page. Here's my schedule in case anyone wants to meet up or listen to a session:
Session ID: CON4385
Session Title: Dependability Challenges for Java Middleware
Venue / Room: Parc 55 - Cyril Magnin II/III
Date and Time: 10/1/12, 15:00 - 16:00
Session ID: CON10656
Session Title: JavaEE.Next(): Java EE 7, 8, and Beyond
Venue / Room: Parc 55 - Cyril Magnin II/III
Date and Time: 10/3/12, 16:30 - 17:30
Session ID: CON4367
Session Title: Java Everywhere: Ready for Mobile and Cloud
Venue / Room: Parc 55 - Market Street
Date and Time: 10/3/12, 11:30 - 12:30
Monday, August 06, 2012
Tower of Babel
Basic - various dialects such as Commodore, zx80, BBC.
Pascal.
C.
6502 machine code.
Lisp, Forth, Prolog, Logo.
68000 machine code and others ...
Pascal-w, Concurrent Euclid, Occam, Ada, Smalltalk-80.
Haskell.
C++, Simula.
Java, Python.
D, Erlang.
C#
Io, Ruby, Ceylon (still a work in progress), Scala, Clojure.
There are probably others I've forgotten about. Truth be told, over the years I've forgotten much of several of the ones above as well! But now I've found the books again, I'm going to refresh my memory.
Thursday, August 02, 2012
Gossip and Twitter
HP missed the Android boat
I'm just back from my annual vacation to visit the in-laws in Canada. Apart from the usual things I do there, such as fishing, diving and relaxing by the pool in 30 Centigrade temperatures with not a single cloud in the sky, I usually end up spending some time acting as technical support for the extended family. This time one of the things I ended up doing was something I had wanted to do for myself earlier this year: install Android on an HP TouchPad. When HP ditched the TouchPad I tried to get hold of one when they were cheap (about $100); not for WebOS but because the hardware was pretty good. Unfortunately I couldn't get hold of one, but my mother-in-law did, and she's suffered under the lack of capabilities and apps ever since.
So I installed ICS on the TouchPad relatively easily and the rest, as they say, is history. Apart from the camera not working (hopefully there'll be a patch eventually), the conclusion from my in-law is that it's a completely new device. And after having used it myself for a few days, I have to agree. Even 8+ months after it was released, the TouchPad ran Android as smoothly as some of the newer devices I've experienced. I think it's a real shame that HP decided to get out of the tablet business (at least for now) with an attitude that it either had to be WebOS or nothing. I can also understand the business reasons why they wanted to get value out of the Palm acquisition. But I do think they missed a great opportunity to create a wonderful Android tablet.
Monday, June 18, 2012
Worried about Big Data
I've been spending quite a lot of time thinking about Big Data over the past year or two and I'm seeing a worrying trend. I understand the arguments made against traditional databases and I won't reiterate them here. Suffice it to say that I understand the issues behind transactions, persistence, scalability etc. I know all about ACID, BASE and CAP. I've spent over two decades looking at extended transactions, weak consistency, replication etc. So I'm pretty sure I can say that I understand the problems with large scale data (size and physical locality). I know that one size doesn't fit all, having spent years arguing that point.
As an industry, we've been working with big data for years. A bit like time, it's all relative. Ten years ago, a terabyte would've been considered big. Ten years before that it was a handful of gigabytes. At each point over the years we've struggled with existing data solutions and made compromises or rearchitected them. New approaches, such as weak consistency, were developed. Large scale replication protocols, once the domain of research, became an industrial reality.
However, throughout this period there were constants in terms of transactions, fault tolerance and reliability. For example, whatever you can say against a traditional database, if it's been around for long enough then it'll represent one of the most reliable and performant bits of software you'll use. Put your data in one and it'll remain consistent across failures and concurrent access with a high degree of probability. And several implementations can cope with several terabytes of information.
We often take these things for granted and forget that they are central to the way in which our systems work (ok, you could argue chicken-and-egg). They make it extremely simple to develop complex applications. They typically optimise for the failure case, though, adding some overhead to enable recovery. There are approaches which optimise for the failure-free environment, but they impose an overhead on the user, who typically has a lot more work to do in the hopefully rare case of failures.
So what's this trend I mentioned at the start around big data? Well it's the fact that some of the more popular implementations haven't even thought about fault tolerance, let alone transactions of whatever flavour. Yes they can have screaming fast performance, but what happens when there's a crash or something goes wrong? Of course transactions, for example, aren't the solution to every problem, but if you understand what they're trying to achieve then at some point somewhere in your big data solution you'd better have an answer. And "roll your own" or "DIY" isn't sufficient.
This lack of automatic or assistive fault tolerance is worrying. I've seen it before in other areas of our industry or research and it rarely ends well! And the argument about it not being possible to provide consistency (whatever flavour) and fault tolerance at the same time as performance doesn't really cut it in my book. As a developer I'd rather trade a bit of performance, especially these days when cores, network, memory and disk speed are all increasing. And again, these are all things we learnt through 40 years of maintaining data in various storage implementations, albeit mostly SQL in recent times. I really hope we don't ignore this experience in the rush towards the next evolution.
Sunday, June 17, 2012
Software engineering and passion
I was speaking with some 16-year-old students from my old school recently and one of them told me that he wanted to go to university to become a software engineer. He's acing all of his exams, especially maths and the sciences, as well as those subjects that aren't really of interest to him. So definitely a good candidate. However, when I asked what he had done in the area of computing so far, particularly programming, the answer was nothing.
This got me thinking. By the time I was his age, I'd been programming for almost four years, and had written games, a basic word processor and even a login password-grabbing "utility". And that's not even touching on the electronics work I'd done. Now you could argue that teaching today is very different from how it was 30 years ago, but very little of what I did was under the direction of a teacher. Much of it was extra-curricular and I did it because I loved it and was passionate enough to make time for it.
Now maybe I've been lucky, but when thinking about all of the people I've worked with over the years and work with today, I'd say that they all share that passion for software engineering. Whether they've only been in the industry for a few years or for several decades, the passion is there for all to see. Therefore, I wonder whether this student has what it takes to be a good engineer. But as I said, maybe I'm just lucky in the people with whom I've been able to work, and I'm sure there are software engineers for whom it really is just a day job who are still good at that job. But I'd still hate not to have the passion and enthusiasm for this work!