Saturday, December 31, 2016

RPCs, groups and multicast

This is the first entry in the series I mentioned earlier. I've tried to replace the references with links to the actual papers or PhD theses where possible, but some are not available online.

Remote Procedure Call

The idea behind the Remote Procedure Call (RPC) [Birrell 84] builds on the fact that conventional procedure calls are a well known and well understood mechanism for the transfer of data and control within a program running on a single processor. When a remote procedure is invoked, the calling process is suspended, any parameters are passed across the network to the node where the server resides, and then the desired procedure is executed. When the procedure finishes, any results are passed back to the calling process, where execution resumes as if returning from a local procedure call. Thus the RPC provides the system or application programmer with a level of abstraction above the underlying message stream. Instead of sending and receiving messages, the programmer invokes remote procedures and receives return values.
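
To make that marshal/suspend/resume cycle concrete, here is a minimal hand-written client stub sketch in Java. The class, method and wire format are all invented for illustration; they aren't from [Birrell 84] or any particular RPC system.

```java
// A hypothetical client-side stub: the caller sees an ordinary method,
// while the stub marshals the arguments, blocks on the reply and unmarshals it.
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

class BankClientStub {
    private final String host;
    private final int port;

    BankClientStub(String host, int port) {
        this.host = host;
        this.port = port;
    }

    // Looks like a local procedure call to the programmer.
    long getBalance(String account) throws IOException {
        try (Socket s = new Socket(host, port);
             DataOutputStream out = new DataOutputStream(s.getOutputStream());
             DataInputStream in = new DataInputStream(s.getInputStream())) {
            out.writeUTF("getBalance");   // marshal the procedure name
            out.writeUTF(account);        // marshal the parameters
            out.flush();
            return in.readLong();         // suspend until the reply arrives
        }
    }
}
```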

The original figure in the thesis showed a client and server interacting via a Remote Procedure Call interface. When the client makes the call it is suspended until the server has sent a reply. To prevent the sender from being suspended indefinitely, the call can have a timeout value associated with it: after this time limit has elapsed the call could be retried, or the sender could decide that the receiver has failed. Another method, which does not make use of timeouts in the manner described, instead relies on the sender and receiver transmitting additional probe messages which indicate that they are alive. As long as these messages are acknowledged then the original call can continue to be processed and the sender will continue to wait.
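
As a rough sketch of the timeout approach (the probe scheme would additionally need a liveness channel between sender and receiver), a wrapper along these lines could sit around a stub like the one above; again, the names are mine, not from the literature, and it assumes the stub sets a socket read timeout.

```java
import java.io.IOException;
import java.net.SocketTimeoutException;

class TimedCall {
    // A call that may fail with an I/O error (including a timeout).
    interface Call<T> { T invoke() throws IOException; }

    // Retry the call a bounded number of times; if every attempt times out,
    // report the receiver as failed (the timeout scheme described above).
    static <T> T withRetries(Call<T> call, int attempts) throws IOException {
        for (int i = 0; i < attempts; i++) {
            try {
                return call.invoke();
            } catch (SocketTimeoutException timedOut) {
                // the time limit elapsed without a reply: retry the call
            }
        }
        throw new IOException("receiver presumed to have failed");
    }
}
```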

Groups

[ANSA 90][ANSA 91a][Liang 90][Olsen 91] describe the general role of groups in a distributed system. Groups provide a convenient and natural way to structure applications into a set of members cooperating to provide a service. They can be used as a transparent way of providing fault tolerance using replication, and also as a way of dividing up a task to exploit parallelism.

A group is a composite of objects sharing common application semantics as well as the same group identifier (address). Each group is viewed as a single logical entity, without exposing its internal structure and interactions to users. If a user cannot distinguish the interaction with a group from the interaction with a single member of that group, then the group is said to be fully transparent.

Objects are generally grouped together for several reasons: abstracting the common characteristics of group members and the services they provide; encapsulating the internal state and hiding interactions among group members from the clients so as to provide a uniform interface (group interface) to the external world; using groups as building blocks to construct larger system objects. A group may be composed of many objects (which may themselves be groups), but users of the group see only the single group interface. [ANSA 90] refers to such a group as an Interface Group.

An object group is defined to be a collection of objects which are grouped together to provide a service (the notion of an abstract component) and accessible only through the group interface. An object group is composed of one or more group members whose individual object interfaces must conform to that of the group.

Interfaces are types, so that if an interface x has type X and an interface y has type Y, and X conforms to Y, then x can be used where y is used. This type conformance criterion is similar to that in Emerald [Black 86]. In the rest of this thesis, we shall assume for simplicity that a given object group is composed of objects which possess identical interfaces (although their internal implementations could be different).
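
In Java-like terms the conformance rule looks something like this (purely illustrative types, not taken from Emerald):

```java
// Y is the group interface type; X conforms to Y, so an object of type X
// can be used wherever a Y is expected.
interface Y {
    int read(String key);
}

interface X extends Y {
    void write(String key, int value);  // extra operations are allowed
}

class Example {
    static int useAsY(Y y) {        // any object whose interface conforms to Y
        return y.read("state");     // can be passed here, including one whose
    }                               // declared type is X
}
```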

The object group concept allows a service to be distributed transparently among a set of objects. Such a group could then be used to support replication to improve reliability of service (a replica group), or the objects could exploit parallelism by dividing tasks into parallel activities. Without the notion of the object group and the group interface through which all interactions take place, users of the group would have to implement their own protocols to ensure that interactions with the group members occur consistently e.g., to guarantee that each group member sees the same set of update requests.

By examining the different ways in which groups are required by different applications, it is possible to define certain requirements which are imposed on groups and the users of groups (e.g., whether collation of results is necessary from a group used for reliability purposes). [ANSA 91a] discusses the logical components which constitute a generic group, some of which may not be required by every group for every application. These components are:
  • an arbiter, which controls the order in which messages are seen by group members.
  • a distributor/collator, which collates messages going out of the group, and distributes messages coming into the group.
  • member servers, which are the actual group members to which invocations are directed.
For some applications collation may not be necessary e.g., if it can be guaranteed that all members of a group will always respond with the same result. As we shall see later, if the communication primitives can guarantee certain delivery properties for messages, then arbitration may also not be necessary. In general, all of these components constitute a group. In the rest of this thesis the logical components will not be mentioned explicitly, and the term group member will be used to mean a combination of these components.
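
A minimal sketch of how these logical components could fit together in code; [ANSA 91a] describes them abstractly, so the classes and the collation policy here are purely illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// A member server: the actual object to which invocations are directed.
interface Member {
    String invoke(String request);
}

// The distributor/collator: distributes an incoming request to every
// member and collates the replies into a single result for the client.
class GroupInterface {
    private final List<Member> members = new ArrayList<>();

    void join(Member m) { members.add(m); }

    // The arbiter role is trivial here: requests are delivered to all
    // members in the order this method is called.
    String invoke(String request) {
        List<String> replies = new ArrayList<>();
        for (Member m : members) {
            replies.add(m.invoke(request));
        }
        // Collation policy: simply return the first reply; a replica group
        // might instead vote, or check that all replies agree.
        return replies.get(0);
    }
}
```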

Multicast Communication

Conventional RPC communication is a unicast call since it involves one-to-one interaction between a single client and a single server. However, when considering replication it is more natural to consider interactions with replica groups. Group communication is an access transparent way to communicate with the members of such a group. Such group communication is termed multicasting [Cheriton 85][Hughes 86].

Multicast communication schemes allow a client to send a message to multiple receivers simultaneously. The receivers are members of a group which the sender specifies as the destination of the message. A broadcast is the general case of a multicast whereby, instead of specifying a subset of the receivers in the system, every receiver is sent a copy.

Most multicast communication mechanisms are unreliable: they do not guarantee that a given message will be delivered even if the receiver is functioning correctly (e.g., the underlying communication medium could lose a message). When considering the interaction of a client and a replica group (or even replica group to replica group communication), such unreliable delivery can cause problems in maintaining consistency of state between the individual replicas, complicating the replication control protocol: if one replica fails to receive a given state-modifying request but continues to receive and respond to other requests, the resulting state divergence could produce inconsistencies at the clients. Thus, it is natural to consider carrying out such group-to-group communication using reliable multicasts, which give certain guarantees about delivery in the presence of failures. These can include the guarantee that if a receiver is operational then the message will be delivered even if the sender fails during transmission, and that the only reason a destination will not receive a message is because that destination has failed. By using a reliable multicast communication protocol many of the problems posed by replicating services can be handled at this low level, simplifying the higher-level replica consistency protocol.
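
As an illustration of the kind of contract such a layer offers, here is a sketch of a reliable multicast interface as the replication protocol above it might see it. This is an invented API for illustration only, not that of any particular system.

```java
import java.util.Set;
import java.util.function.Consumer;

// A sketch of the interface a reliable multicast layer might expose: the
// caller names a group rather than an individual receiver, and the layer
// promises that every operational member either delivers the message or
// is reported as failed.
interface ReliableMulticast {
    // Deliver msg to every operational member of groupId, even if the
    // sender fails part-way through transmission.
    void multicast(String groupId, byte[] msg);

    // Register a handler invoked exactly once per delivered message.
    void onDeliver(String groupId, Consumer<byte[]> handler);

    // The members the layer currently believes to be operational.
    Set<String> membership(String groupId);
}
```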

Some historical blogging

Over Christmas I was doing some cleaning up of my study and came across a copy of my PhD. Any excuse to stop cleaning, so I took some time to skim through it and thought some of the background research I had included might still be useful today for a new audience. Now although the PhD is available for download, it's not exactly easily searchable or referenced, so the next few entries will try to rectify some of that.

Saturday, October 15, 2016

Architecture First

We've all heard the story of The Three Little Pigs. We all know that building houses from straw isn't a great idea if your use case is to survive a wolf's breath. We all know that sticks aren't that much better either. Of course brick is a far better medium and the pig that built from it survived. Now what the original story doesn't say, probably because it's a little more detail than children really need to understand, is that before building their houses all of the pigs went to architecture school and studied at length the arch, the truss, stylobates and other things necessary to design and then construct buildings from various materials. Now it's likely the pig that used straw didn't listen when they were talking about the tensile strength of straw, and the pig that used sticks ignored the warnings that they're really only good for building forest dens (or birds' nests). But if they'd been listening as much as their brother then they'd have known to give him a hand with the bricks and not waste their time.

Now even the best architects make mistakes. It's not possible to suggest a single reason for these though. Sometimes it's because the architect didn't understand the physical properties of the material being used (a bit like the straw and stick pigs). Sometimes it's because they didn't fully understand the environment within which their building would reside. But fortunately for us, the vast majority of buildings are successful and we feel safe to be within them or around them.

You may be wondering why I'm talking about architecture here. I think I've mentioned this before but my biggest worry about the rush towards microservices is the lack of focus, or discussions, around architectures. I'm sure many of the established groups that have been building systems with services (micro or not) understand their architectures and the impact service-orientation has on them, or vice versa. But I'm also convinced that many groups and individuals who are enamoured by the lustre of microservices aren't considering architectures or the implications. That worries me because, as I said at the JavaOne 2016 presentation I gave recently, launching into developing with microservices without understanding their implications and the architecture will neither solve any architecture problems you may have with your existing application nor result in a good/efficient distributed system. In fact it's probably the worst thing you could do!

Even if you've got "pizza teams", a culture that has embraced DevOps, and fantastic tools supporting CI and CD, if you don't understand your current and new architecture then none of this is really going to help you. That's not to suggest those things aren't important after you've done your architecture design and reviews, because they clearly are. The better they are, the quicker and more reliably you can build your new application using microservices and manage it afterwards. But you should never start such an effort just because you've got the right tools, building blocks and support infrastructure. It could be argued that they're necessary, but they most certainly aren't sufficient. It needs to start with architecture.

Update: I should also have mentioned that if after any architecture review you find that you don't need many, or any, microservices then you shouldn't feel a sense of failure. A good architect (software or not) knows when to use and when not to use things. Remember, it's quite probable the pig who used the bricks considered straw and sticks first but decided they just weren't right this time around.

Friday, September 02, 2016

Microservices and distribution

OK so following on from my previous article on inferring the presence of microservices within an architecture, one possibility would be to view the network traffic. Of course it's no guarantee, but if you follow good microservices principles that are defined today, typically your services are distributed and communicating via HTTP (OK, some people say REST but as usual they tend to mean HTTP). Therefore, if you were to look at the network traffic of an "old style" application (let's not assume it has to be a monolith) and compare it with one that has been re-architected around microservices, it wouldn't be unreasonable to assume that if you saw a lot more HTTP requests flowing then microservices are being used. If the microservices are using some other form of communication, such as JMS, then you'd see something equivalent but with a binary protocol.

We have to recognise that there are a number of reasons why the amount of network traffic may increase from one version of an application to another, so it could be the case that microservices are not being used. However, just as Rutherford did when searching for the atomic nucleus, and as all good scientists do, you come up with a theory that fits the facts and revise it when the facts change. Therefore, for simplicity's sake, we'll assume that this could be a good way to infer microservices are in place if all other things remain the same, e.g., the application is released frequently, doesn't require a complete re-install/re-build of user code etc.

Now this leads me to my next question: have you, dear reader, ever bothered to benchmark HTTP or any distributed interaction versus a purely local, IPC, interaction? I think the majority will say yes, and of those who haven't, most will probably have a gut instinct for the results. Remote invocations are slower, sometimes by several orders of magnitude. Therefore, even ignoring the fault tolerance aspects, remote invocations between microservices are going to have a performance impact on your application. So you've got to ask: why am I doing this? Or maybe: at what point should I stop?
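
If you've never measured the gap yourself, a crude sketch along the following lines (Java 11+ HttpClient, hitting a placeholder localhost service) is enough to see the orders-of-magnitude difference between a local invocation and a remote one:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LocalVsRemote {
    static int local(int x) { return x + 1; }   // a trivial in-process call

    public static void main(String[] args) throws Exception {
        // Average a million local calls.
        long t0 = System.nanoTime();
        int acc = 0;
        for (int i = 0; i < 1_000_000; i++) acc += local(i);
        long localNs = (System.nanoTime() - t0) / 1_000_000;

        // Average a hundred remote calls to a placeholder service.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/ping"))  // placeholder URL
                .GET().build();
        long t1 = System.nanoTime();
        for (int i = 0; i < 100; i++) {
            client.send(req, HttpResponse.BodyHandlers.discarding());
        }
        long remoteNs = (System.nanoTime() - t1) / 100;

        System.out.printf("local ~%d ns/call, remote ~%d ns/call (acc=%d)%n",
                localNs, remoteNs, acc);
    }
}
```

Even against localhost the remote call will typically come out in the microseconds to milliseconds, against nanoseconds for the local one; add a real network and the gap only widens.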

Let's pause for a second and look back through the dark depths of history. Back in the late 19th century/early 20th century, before electrification of factories really took off, assembling a product from multiple components typically required having those components shipped in from different parts of the country or the world. It was a slow process. If something went wrong and you got a badly built component, it might prevent assembly of the entire product until a new version had been sourced.

In the intervening years some factories stayed with this model (to this day), whereas others moved to a production factory process whereby all of the pieces were built on site. Some factories became so large, with their constituent pieces being built in their own neighbouring factories, that cities grew up around them. However, the aim was that everything was built in one place so that mistakes could be rectified much more quickly. But are these factories monoliths? I'm not so sure it's that clear cut, simply because some of the examples I know of factories like this are in the Japanese car industry, which has adapted to change and innovation extremely well over the years. I'd say these factories matured.

Anyway, let's jump back to the present day, but remembering the factory example. You could imagine that factories of the type I mentioned evolved towards their co-located strategy over years from the distributed interaction approach (manufacturers of components at different ends of the planet). They managed to evolve because at some point they had all of the right components being built, but the impediment to their sales was time to market or time to react. So bringing everything closer together made sense. Once they'd co-located, then maybe every now and then they needed to interact with new providers in other locations, and if those became long term dependencies they probably brought them "in house" (or "in factory").

How does this relate to microservices and the initial discussion on distributed invocations? Well, whilst re-architecting around microservices might help your application evolve and be released more frequently, at some point you'll need to rev the components and application less and less. It becomes more mature and the requirements for change drop off. At that stage you'd better be asking yourself whether the overhead of separate microservices communicating via HTTP, or even some binary protocol, is worth it. You'd better be asking yourself whether it's not better to just bring them all "in house" (or in process) to improve performance (and probably reliability and fault tolerance). If you get it wrong then of course you're back to square one. But if you get it right, that shouldn't mean you have built a monolith! You've just built an application which does its job really well and doesn't need to evolve much more.

Monday, August 29, 2016

Microservices and subatomic particles - an end-user perspective?

For a while now we've seen various debates around microservices, such as how they compare to SOA, whether the emphasis should be on size (micro), whether HTTP (and REST) is the preferred communication style, and where, why and when you should adopt them, as well as when you shouldn't. The list goes on and on, and I've participated in a few of them.

Recently at work we've been focusing on how best to consider microservices within an existing architecture, i.e., how, why and when to break down so-called monoliths into microservices. We've had a number of our teams involved in these discussions, including Vert.x, WildFly Swarm and OpenShift. We've made great progress and this article isn't about that work - I'll leave it to the various teams and others to report once it's ready.

However, during this work I also went on vacation and that gave me time to ponder on life, the universe and everything microservices related! During the time away I kept coming back to two fundamental questions. The first: why use microservices? The second: how can end-users tell if they're being used to (re-) construct (distributed) applications? Much of what we've heard about microservices has been from the perspective of developers who will use microservices, not necessarily the end-user of (re-)architected applications. And of course you're probably asking a third: how does all of this relate to subatomic particles? Patience and all will be revealed.

To answer the first question, there are a lot of reasons why people, vendors, analysts etc. suggest you should consider microservices, either as a building block for new applications or, as seems more common at the moment, as a way of refactoring your existing application or service(s) which may be monolithic in nature. At the core though is the requirement to have an architecture which allows for constituent components to be developed, revised and released independently of the entire application. The so-called "Pizza Team" approach, for instance.

This then leads us nicely to the second question: how can you tell an application has been developed, or re-architected, using microservices? If you're a user of a service or application, chances are that unless the source code is available to review and you've got that inclination, "microservices enabled" isn't necessarily going to be one of the slogans used to market it. And in fact should you care? Ultimately what you're probably more interested in is a mixture of things such as cost, reliability, performance and suitability for purpose. But let's assume you do want to know. How can you tell?

Well this is where the subatomic particles come in. Given my university degree majored in physics and computing, I share an equal love for both, and at times when my mind wanders I like to try to see similarities between the two areas. In the past, for instance, I've used Heisenberg's Uncertainty Principle to describe weak consistency transaction protocols. This time around I was again recalling Heisenberg; those of you who have also studied physics or have a passing interest will know that the wave-particle duality of subatomic particles cannot be viewed directly but can be inferred, for instance using Young's slit experiment and firing a single "particle" at two slits to observe an interference pattern reminiscent of those produced by wave interference. This is a pretty extreme example of how we can infer the properties of particles we cannot view directly. Others exist, including Rutherford's original experiment to infer the existence of the atomic nucleus; I'll leave that as an exercise to the interested reader, but will say it's a fascinating area of science.

Now where all of this comes full circle is that if you're an end-user of some piece of software that has been well architected and does its job, is released frequently enough for you to do your work efficiently and basically doesn't get in the way, could you tell if it was architected or re-architected using microservices? The answer in this case is most probably no. But on the flip side, suppose you've been using an application or service which is released too slowly for you (e.g., bug fixes take months to arrive), and maybe requires a rebuild of your code each time it is released. Then let's assume things change and not only do you get updates on a daily basis but they often fit seamlessly into your own application usage. Does this mean that the developers have switched to microservices? Unfortunately the answer is no less definitive than previously because whilst a correct use of microservices would be an answer, there are other approaches which could give the same results - despite what you may have read, good software development has existed for decades.

Therefore, without looking at the code how can an end-user know whether or not microservices are being used and why is that important? It's important because there's a lot of hype around microservices at the moment and some people are making purchasing decisions based on whether or not they are present, so you probably do need some way to confirm. Architecture diagrams are great but they're no substitute for code. But if you can't see the code, it's tricky to infer one way or the other. However, on the flip side maybe as an end-user you really shouldn't care as long as you get what you want from the application/service? Good architectures and good software architects win out in the end using a variety of techniques.

Note: yeah, the other obvious analogy between microservices and subatomic particles could be that maybe microservices are the smallest divisible aspects of your application that make sense; you can't really re-factor your code smaller than a microservice in just the same way that you can't go beyond sub-atomic particles. However, since there are things smaller than subatomic particles I didn't want to go there.

Saturday, May 21, 2016

Fault tolerance and microservices

A while ago I wrote about microservices and the unit of failure. At the heart of that was a premise that failures happen (yes, I know, it's a surprise!) and in some ways distributed systems are defined by their ability to tolerate such failures. From the moment our industry decided to venture into the area of distributed computing there has been a need to tackle the issue of what to do when failures happen. At some point I'll complete the presentation I've been working on for a while on the topic, but suffice it to say that various approaches including transactions and replication have been utilised over the years to enable systems to continue to operate in the presence of (a finite number of) failures. One aspect of the move towards more centralised (monolithic?) systems that is often overlooked, if it is even acknowledged in the first place, is the much simpler failure model: with correct architectural consideration, related services or components fail as a unit, removing some of the "what if?" scenarios we'd have to consider otherwise.

But what more has this got to do with microservices? Hopefully that's obvious: with any service-oriented approach to software development we are inherently moving further into a distributed system. We often hear about the added complexity that comes with microservices that is offset by the flexibility and agility they bring. When people discuss complexity they tend to focus on the obvious: the more component services that you have within your application the more difficult it can be to manage and evolve, without appropriate changes to the development culture. However, the distributed nature of microservices is fundamental and therefore so too is the fact that the failure models will be inherently more complex and must be considered from the start and not as some afterthought.

Thursday, May 19, 2016

Serverless? Really?

Our industry appears to be going through a phase of giving new or not so new approaches short names which, though catchy, are so inaccurate as to be meaningless and possibly dangerous. These include "containerless", when containers of one sort or another are clearly present. Now we have "serverless".

Look, I absolutely get what's behind the term: cloud has made it so that developers don't need to worry about deploying databases, web servers or whatever is needed to run their application, and it also takes care of scaling and fault tolerance. But servers and databases and other infrastructure are still there because your application still needs them; just because you don't see them doesn't mean they're not there.

Saturday, April 23, 2016

Types of microservices

I've started to post a few things over on the new Red Hat Developer Blog. My first entry was about microservices.

Thursday, April 21, 2016

You keep using that word (REST) and I don't think it means what you think it does

A long time ago in a galaxy far, far away ... OK, not quite. But some time ago our industry spent a lot of time and effort discussing the pros and cons of REST, SOA, SOAP (!) etc. I even had a few things to say on the subject myself. To misquote Jeff Wayne, "minds immeasurably superior to my own" on the topic of REST and HTTP at the time had a lot more to say and do in this area. It's something which is easily Googled these days. Covered in books too. Probably even several video master classes on the topic of REST and HTTP, let alone their relevance to SOA. And yet it seems that some people either have a very short memory, didn't understand what was said, or perhaps didn't do their homework?

I'm specifically talking about how REST plays into the recent microservices wave. Yes, I think there are some problem areas with microservices and yes, one of them is the dogmatic assertion by some that REST (they tend to really mean HTTP) is mandatory. I don't believe that. I do believe that different message exchange patterns may be suitable for a range of microservices and to limit ourselves to just one (HTTP) does not make sense.

Oh and yes, it still frustrates me to this day when people talk about REST and they're really talking about HTTP - my only consolation here is that if I find it frustrating it must really annoy the h*ll out of the RESTafarians! Not all REST is HTTP and likewise not everything which uses HTTP is necessarily REST. Simple, huh? Well you'd think ...

Anyway, putting that aside, what's got me more frustrated recently is that some people are suggesting that REST (really HTTP) can't do async for microservices and can therefore prevent you from breaking apart your monolith. I'm not even going to attempt to explain in detail here how that is wrong, except to suggest that a) those people should go and read some InfoQ articles from the early 2000's (yes, even theserverside had things to say on the topic), b) do a Google search, c) read something from my friends/colleagues Jim Webber, Savas and others on the topic of REST (and maybe Restbucks - hint, hint), d) in case they're confused between REST and HTTP, maybe go and read the HTTP response codes and look at those in the early 200's range (again, another hint). As I said above, I'm not suggesting there aren't some issues with REST/HTTP for microservices. And as I've also said over the last decade or so, maybe there are some issues with building some types of distributed systems with REST/HTTP. But this isn't one of them!
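
For the avoidance of doubt, here is a bare-bones sketch of the pattern those early articles describe: the server returns 202 Accepted plus a location to poll, so neither party blocks while the work is done. The URLs, payload and status handling are invented purely for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AsyncOverHttp {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Submit the work; the (placeholder) service replies with
        // 202 Accepted and a Location header naming the job resource.
        HttpRequest submit = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/orders"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"coffee\"}"))
                .build();
        HttpResponse<Void> accepted =
                client.send(submit, HttpResponse.BodyHandlers.discarding());
        if (accepted.statusCode() != 202) {
            throw new IllegalStateException("expected 202 Accepted");
        }
        URI job = URI.create(
                accepted.headers().firstValue("Location").orElseThrow());

        // Poll (or go and do other work) until the job resource is done.
        while (true) {
            HttpResponse<String> status = client.send(
                    HttpRequest.newBuilder(job).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            if (status.statusCode() == 200) {
                System.out.println("result: " + status.body());
                break;
            }
            Thread.sleep(500);   // still in progress: e.g. 202 again
        }
    }
}
```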

And this then brings me to a related topic. And if he hadn't been saying it around REST, I'm pretty sure Inigo Montoya would've been saying "You keep using that word. I do not think it means what you think it means" about "asynchronous". Yes, some of the authors try to make the distinction between a blocking call within a single address space and a one way interaction with a service across the network, but that really doesn't instil me with confidence that they know what they're talking about. If they threw in a bit of Fischer, Lynch and Paterson, rather than the oft overused reference to CAP, then maybe I'd be slightly less concerned. But that'd require some homework again, which they don't appear to want to do! Oh well, good luck to those who follow them!

Note, I have deliberately not included many links to things in the post in the hopes it will encourage the reader to follow up.

Wednesday, April 06, 2016

Micromonoliths?

I'm not even sure it's a word, but I wrote something on micromonoliths elsewhere - basically just some words on architecture.

Some cross postings on microservices

As I mentioned earlier, I've had some time on holiday recently and spent some time musing on microservices amongst other things. I've written a couple of articles over on my JBoss.org blog, one on microservices and co-location, and one about the mad rush to make everything a microservice. As Rod Serling often said: submitted for your approval.

Monday, April 04, 2016

Poor scientific method

One of the nice things about being on holiday is that the mind can wander down paths that might otherwise not be trodden, particularly because to do so typically needs more available time. This holiday I spent some of that time pondering microservices, amongst other things. As I've mentioned and been quoted elsewhere, I'm not against the fundamental concepts behind microservices architectures (MSA) - I see them as embodying an evolution of services-based architectures which we've seen happening over several decades. And yet I have a few concerns or niggles about the way in which some groups are positioning it as so significantly different that it represents a fundamental new approach or wave.

It wasn't until I read the recent NIST document that I managed to put my finger on at least one of these concerns: they're making data fit their own definitions of SOA and MSA, whilst conveniently ignoring those facts which don't fit. Any good scientist knows that goes against scientific method, whereby we come up with a theory based upon all of the available data and that theory lasts until some new data appears which blows a hole in it (yes, I'm paraphrasing). The NIST document is particularly guilty of this by choosing a definition of SOA which is conveniently at odds with their definition of MSA - it's not even the one that standards such as the OASIS SOA Reference Model define! The table they include to compare and contrast the two is so badly flawed - go read the SOA-RM and look at the NIST MSA definition to see how close SOA and MSA are!

Look, I'm not suggesting there's nothing different in MSA than what we've been doing in the past with service oriented architectures. But let's please be objective, stick to the facts and define theories or models based upon all of the available data. Then also be prepared to change if new data appears. But please don't select data to match your own preconceived notions. In our more fully connected world there's really no excuse for being selective of the available data or ignorant of what is available!

Sunday, April 03, 2016

Holiday thoughts ...

I just got back from a week of holidaying with family in Tenerife. We did lots of great things together, such as whale watching, snorkelling, visiting the local volcano (great mountain roads with barely enough space for one car, let alone two lanes!) and generally soaking up the sunshine! It also gave me an opportunity to catch up on some articles and work related videos I'd been saving up for a while, which resulted in me also being able to write down some of my own thoughts on a few related areas. I'm taking a bit of a down-day today before going back to work tomorrow, so it may take me a few days to upload the entries here or to my JBoss.org account.

Tuesday, January 26, 2016

Frameworks versus stacks - cross posting

Wasn't really sure if I should put this on my JBoss.org blog or here, so cross posting.

Sunday, January 24, 2016

SRC-IoT 2016

I have had the honour of working with Professor Paul Watson to create the System Research Challenges Workshop. For the first year we focussed on IoT and it's possible we may keep that high-level theme next time. But it was really great to see and hear the attendees embrace the underlying issues which are presented by large-scale distributed systems, which IoT embodies: reliability, fault tolerance, security, trustworthiness, data management etc. Of course all of the presentations had an IoT focus, but even there we had a wide range of examples, from field devices through gateways and including wearables (yes, there were a lot of side discussions about whether some of these IoT devices would ever really take off).

We had attendees from industry (e.g., Red Hat, IBM and ARM) as well as SMEs and academia (e.g., Newcastle University, Lyon and Cambridge). It was a great mix of practical and theoretical, highlighting some of the challenges we have ahead of us in research and development. And as with many of these kinds of events, the discussions around the sessions generated as much interesting conversation as the presentations themselves.

As well as 2 days of 30 minute presentations (maybe we'll try and get the agenda published somewhere), we also held a 2 hour lightning talk session on the first evening. Here anyone attending, whether or not they had a formal presentation during the event, was encouraged to present on a topic for 5 minutes. There hadn't been much preparation for this beforehand, so there was a little concern about whether we'd be able to fill the time. We needn't have worried - we could have gone much longer than the allotted 2 hours. It was a lot of fun. In fact my favourite talk of the entire event was probably here, when Jonathan Halliday gave a presentation on Big Data over the centuries, going back hundreds of years and managing to also touch on open source 400 years ago!

In conclusion, I thought the event went well. I'm hoping we can do it again next year, perhaps with the same theme or maybe we need to change it. We'll know closer to the time.

Friday, January 01, 2016

Elite

Back when I was just starting at university Elite came out for the Beeb. I remember going into my local town and walking to the computer shop to buy the game, then getting the slow bus home, waiting with apprehension for the time when I could put the tape into the cassette drive and slowly load up the game! By today's standards the graphics were basic, but in 1984 they were ground breaking. And I spent the next couple of years playing Elite at every opportunity. Even when I upgraded to the Atari 520 STM I longed for Elite on it but it never arrived.

I learned about Elite Dangerous back in 2012, and that it was available on Steam earlier this year. I wasn't a Steam player at the time, preferring to do my gaming on a PlayStation 3 or 4, or perhaps an XBox 360. However, I took the plunge this morning and for my first game purchase of 2016 I decided to install Steam on my laptop and buy Elite. I'm sure I'll have hours of fun ahead if it's anything like the original game!