Friday, March 15, 2019

Why Quarkus?

One size rarely fits all. I've written about this a number of times, for instance on the topic of extended transactions. In the world of building architecture that rule applies too or we'd all be living in caves! The Red Hat/JBoss middleware developer strategy has recognised this fact for many years, and we support the frameworks and associated stacks for a wide variety of developers to use when building the next generation of microservice-based applications. Because polyglot development is the normal situation today, the strategy does not focus solely on Java, which is why Node.js is in there. Furthermore, not all new applications will be built from scratch; there has to be a way for existing applications to evolve, and similarly for existing developers to evolve their skills rather than having to start from scratch.

As anyone who has been in this industry long enough knows, existing investments remain with us for years (decades in many cases). We can’t just throw them out the window when something new comes along. Neither can we throw the people out the window! Therefore, appealing to so-called brownfield developers is critical because it allows them to leverage their existing skills and key knowledge, which has likely been built up over many years. Brownfield can include:
  • building new services and applications which require/leverage/integrate with existing systems but where the new components are developed in some new framework or approach;
  • evolving existing systems, components and services built with mature (I hate "legacy") frameworks and stacks towards newer styles. One example of this would be the evolution of CORBA to J2EE.
So whilst it might look like adding Quarkus to the mix confuses things, it really doesn’t, or at least it shouldn’t, and here’s why: there’s a spectrum of developers and applications, and our approach attempts to cover the important areas of that spectrum. Specifically, no one tool or framework is right for every application or developer, and you’ll need some or all of Quarkus (remember, it combines [will combine] serverless with Camel-K/Fuse, our core services on Kubernetes such as [xPaaS] SSO, transactions and messaging, and uses Eclipse Vert.x for reactive and Eclipse MicroProfile/SmallRye), together with EAP/WildFly if you need Java EE/Jakarta EE. And crucially, wherever your applications are developed or deployed, they will be able to communicate and interact seamlessly as if built with the same technology stack.

We’ll be talking more about how this all comes together in the coming weeks and months, but another important point to remember is that we’re driving this upstream first, as we always do within JBoss/Red Hat. In fact, Ken Finnigan's recent blog about the future of Thorntail is one of those important next steps. So if you want to get involved and help influence the technical direction, please do! There are multiple entry points, i.e., you don’t need to start with Quarkus if you are more interested in Keycloak, WildFly or Narayana (yeah!).

Tuesday, October 30, 2018

Red Hat and IBM ... Really?

I’m writing this here on my personal blog because it’s not an official statement from Red Hat and I wanted everyone to know that what I’m putting here is my honest opinion and not clouded by the fact I work for Red Hat. I’m trying to be objective here, building on my scientific background.

I’ve had a couple of days to digest the news about Red Hat acquiring IBM. That is the way round, right? 😉 I always like to say that JBoss acquired Red Hat in 2006 too 🙂 Anyway, I’m excited by the opportunity this gives. I know many people at different levels throughout IBM and they are good people and have a great deal of respect for Red Hat in general and specifically, due to the areas where we interact, the middleware/Java teams, products and projects.

Yes, maybe there are other companies who would be “better” but I can think of a lot of others who would be “worse”. I’ve been acquired six times now, including this one, so I know full well what many of my fellow Red Hatters are going through. I know that in this period of uncertainty there will be those who are worried. That’s only natural. But I’m hoping that over the coming weeks and months we’ll get more details and everyone will be able to see what working as a wholly owned subsidiary of IBM actually means, if IBM wants to preserve Red Hat.

IBM and Red Hat have a great track record of working together in the middleware space, in areas such as Eclipse MicroProfile, Jakarta EE and Java, and then there’s all the other standards work we’ve done over the years. Yes, they aren’t Red Hat, but then no company ever could be. I’ve worked at a number of companies and I think JBoss and Red Hat are the best, but when a company spends a third of its market cap to acquire another company, that’s almost a merger! It’s in everyone’s interest at IBM to make it succeed and keep the great teams we have built up in Red Hat. IBM recognises what we all appreciate here: our key assets are the people and not just the code. Therefore, they need to ensure we keep those people happy and together if Red Hat is to continue to succeed. And I think Chris puts it well: IBM is no novice in the open source space.

Remember how only a few years back we had hoped IBM would acquire Sun instead of Oracle? I have had some contact with friends and colleagues in IBM over the past few days and I can tell you they are excited by the possibility of working together. Collectively we have an opportunity to define the next phase for IBM. And I use the term “we” here because the Red Hat communities have been a key part of our success to date, and just as this deal succeeds or fails by bringing the employees along, so too does it need the communities: you can’t be an open source company, big or small, without them. Red Hat works in some of the biggest upstream communities there are, and I’m sure IBM understands that their value lies in collaboration, in trust built on relationships between people first and vendors second, and in so many other factors that are really easy to spoil if you aren’t careful. Again, as Chris mentions, they “get this” and I have every reason to believe that myself.

I know some people outside the company have been incredibly down on the deal, suggesting that IBM will kill or stifle the Red Hat culture, cause us to become unfocused or take the proverbial foot off the pedal. Sure, that’s possible, but it’s unlikely because no company has ever acquired a company quite like Red Hat before. Our culture of openness is incredibly strong (didn’t Darth Vader once say “The Red Hat Culture is strong in this one” or am I thinking of something else?) I’m pretty confident it won’t happen because we won’t let it happen. Remember what I said about the assets? Well, many of those assets are pretty vocal when they want to be ... I’ve been on the receiving end once or twice, publicly and privately 😉 Also I think some of those people trying to throw cold water on this are the usual trolls, maybe disgruntled for other reasons, or wanting to pull key Red Hat folks away to other companies in the first few hours of uncertainty; not necessarily objective. But they could be right in the long run if Red Hat and IBM can’t work well together. Fortunately, I think we stand a much better chance of succeeding than failing, if we all stick together.

I’ll finish off by trying to end some speculation I’ve seen in the media about what “lives and dies” after the acquisition. Here they’re talking about software products, clearly! Look, no one knows for sure, but I do know that between the two companies we have some of the most influential open source projects and products out there, with huge adoption and developer interest. It’s not as clear cut as some people are trying to make out, especially when you look at the future of cloud-native middleware. I have some thoughts on the subject, of course, and although I haven’t confirmed it, I’m sure people at IBM do too. We’ll have some good, collaborative discussions to define the way forward, taking everything I’ve already said about developers, engineers, community and customers into account. It’ll be fun and I’m hoping all interested parties, whether or not employees of either company, will participate.

On a personal level, I intend to remain committed to Eclipse MicroProfile, Jakarta EE, WildFly, Thorntail, Drools, jBPM, 3scale, Fuse, Camel, A-MQ, ActiveMQ, ActiveMQ Artemis, OpenJDK, the JCP EC and all of the countless projects and products we have today. The teams and the code are definitely something of which we should all be proud.

Sunday, January 28, 2018

Eventual consistency and microservices

Since at least the early 1990s our industry and academia have dabbled with eventual consistency. Over the years I've written and spoken a bit about eventual consistency around transactions and replication, as it applies to scaling systems and also to SOA. As Eric wrote many years ago, some of it even made its way into standards. Recently the industry has been looking at how eventual consistency may be required within microservices. For instance, what about Eclipse MicroProfile and transactions (yes, a favourite of mine)? It's possible some of this may make its way into Eclipse EE4J. And now Matt Klein from Lyft has written a nice piece about eventual consistency and networks, which is also worth a read.

Just remember: right tool for the right job. Just as ACID transactions aren't right for every use case, strong consistency isn't right for every use case, and hence neither is eventual consistency right for every use case. Think, understand, consider and apply.

Sunday, December 31, 2017

Effectiveness of Replication Strategies

In [Noe 86] a simulation study comparing available copies, quorum consensus, and regeneration was carried out to determine which replication protocol was the most efficient given a specific distributed system configuration and a certain set of failure characteristics.

The model was programmed in SIMULA [Birwhistle 73], and assumed a local area network consisting of a number of separate computers interconnected by a communications medium such as an Ethernet, with no communications failures. The parameters used in the simulation, such as crash rates and node load, were obtained from studies of existing distributed systems and from mathematical models, and all parameters were the same for each replication protocol simulated. Crash frequencies varied from one crash every 100 days to one every 300 days, with repair times having a mean of 7 days. The number of copies of each replicated resource ranged from one to three, and the ratio of read requests to write requests varied from a probability of 0.3 up to 0.7, with request frequencies varying between 50 and 400 requests per day. The number of nodes in the system also varied from 10 to 30. All measured results were taken over a simulated time of 2 years of operation.

Simulation Results 

The quantities calculated from the results were the read and write availability of the replicated service. The read availability was defined and calculated as the total number of successful read requests divided by the total number of read requests. Write availability was similarly defined in terms of write requests.
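These two metrics are straightforward to compute. A minimal sketch in Python (the request counts below are made-up illustrative figures, not results from [Noe 86]):

```python
# Read and write availability as defined in the study, sketched in Python.
# The request counts below are hypothetical, for illustration only.

def availability(successful: int, attempted: int) -> float:
    """Fraction of requests that completed successfully."""
    if attempted == 0:
        return 0.0
    return successful / attempted

# e.g. 9,850 of 10,000 simulated reads succeeded over the simulated run
read_availability = availability(9850, 10000)
write_availability = availability(9200, 10000)

print(f"read availability  = {read_availability:.3f}")   # 0.985
print(f"write availability = {write_availability:.3f}")  # 0.920
```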

What was found from the results was that replication provides a significant increase in availability. However, there is little point in going beyond a maximum of two copies. Both the Available Copies and Regeneration techniques provide a substantial increase in availability, raising read and write availability to very close to 1.0, i.e., almost every request performed upon a replicated resource will be carried out successfully. There is very little additional gain with either of these protocols in having a maximum of 3 copies of each resource.

The Voting protocol provided less protection than either of the other protocols and would not even be considered until a maximum of 3 copies were used. In such a case the optimal size for a read and write quorum is 2; with a write quorum of 3 the replicated resource performed worse than in the non-replicated case because there are three ways to lose a single copy and destroy the write quorum.

Both Available Copies and Regeneration are preferable to Voting if network partitions are rare, or if measures are added to prevent or reconcile independent updates during partition rejoining. The read and write availability of the Available Copies technique are the same, and remain relatively constant despite changes in the request rate and the number of nodes. Regeneration can be preferable to Available Copies in an unstable environment that suffers from high crash frequencies, with a high number of updates and frequent reconfiguration of the network. Further, Regeneration can equal or surpass the performance of the Available Copies technique only if enough additional storage is supplied to allow regeneration. 

Optimistic and Pessimistic Consistency Control

The replication protocols described previously can all be considered pessimistic with regard to consistency of data in the presence of network partitions. Until a partitioned network is reconnected, it is impossible for nodes on one side of the partition to differentiate between being partitioned and a failure of the nodes on the other side of the partition. As has been described in previous sections, this can have an adverse effect on replicated groups which have also been partitioned, unless some method is provided to ensure that update operations can only be performed consistently on the entire group. Typically, these replication protocols are used in conjunction with atomic actions, and in the event of a network partition either only one partition (in the case of Voting) or no partition is allowed to continue to progress, meaning that any atomic actions that were executing must be aborted to maintain consistency of state between the partitioned replicas. They are pessimistic, using the principle that, if it is not possible to tell definitely that replicas have failed then it is better to do nothing at all. Those protocols which can operate correctly in the presence of a network partition (still maintain consistency of replicas), such as Voting, typically impose an overhead on the cost of performing operations on replicas (in the Voting protocol, the cost of performing a read operation is increased because a quorum of replicas must be obtained).

Optimistic consistency control schemes like those described in [Davidson 84][Abbadi 89] take a different approach and allow actions to continue operating even in the event of a partition. When the partition is eventually resolved it must be possible to detect any conflicts that have arisen as a result of the original failure, and to be able to resolve them. These protocols assume that it is possible for committed actions to be rolled back (i.e., un-committed). How the detection and resolution of conflicts is performed is system specific, e.g., in some systems it must be done manually, whereas in [Davidson 84] a mechanism is described that allows the system to automate much of the work.

JBossTS/Narayana blog - a reminder

I haven't been cross-posting links to the JBossTS/Narayana blog recently and it's well worth taking a look: the team have written a number of great articles over the year, particularly around microservices and transactions.

Primary Copy

The Primary Copy [Alsberg 76] mechanism is an implementation of the passive replication strategy, where one copy of an object is designated as the primary, and the other copies become its backups. All write operations are performed on the primary copy first, which then propagates the update to the secondary copies before replying to the request. Reads can be serviced by any replica since they all contain consistent copies of the state. Any inaccessible secondary copies are typically marked as such (perhaps to a name-server) so that they cannot be used as either a future primary or to be read from until they have been brought up-to-date (another method would be physically to remove them from the replica group until they have been updated). If the Primary Copy fails then a reassignment takes place between the remaining copies to elect a new Primary.
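As a rough sketch of this write path (class and method names are my own, not from [Alsberg 76]; failure detection, name-server bookkeeping and primary election are omitted):

```python
import random

# Passive (primary copy) replication, as a minimal sketch. Class and
# method names are illustrative; failure handling and election are omitted.

class Replica:
    def __init__(self):
        self.state = {}

    def apply(self, key, value):
        self.state[key] = value

class PrimaryCopyGroup:
    def __init__(self, n_backups):
        self.primary = Replica()
        self.backups = [Replica() for _ in range(n_backups)]

    def write(self, key, value):
        # the primary performs the update first ...
        self.primary.apply(key, value)
        # ... and propagates it to every backup *before* replying,
        # so all copies hold consistent state
        for backup in self.backups:
            backup.apply(key, value)

    def read(self, key):
        # reads can be serviced by any replica
        replica = random.choice([self.primary] + self.backups)
        return replica.state.get(key)

group = PrimaryCopyGroup(n_backups=2)
group.write("queue-depth", 42)
assert group.read("queue-depth") == 42
```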

A problem arises when the Primary copy fails. If the primary site is down a reassignment is in order. However, if the network has become partitioned a reassignment would compromise consistency. If network partitioning has a low probability of occurrence then the process for electing a new primary can be allowed: the secondaries should be notified of the primary's failure and they must agree amongst themselves which one is to become the new primary copy. If the election of a new primary takes place and the existing primary has not actually failed (perhaps it was not able to reply to the 'are you alive' probe-messages in time because of an overloaded node) then the protocol should ensure that the client will only accept a reply from the newly elected primary, as the old primary could be in an inconsistent state. The protocol described in [Alsberg 76] tolerates network partitions by allowing operations to continue in all partitioned segments and relying upon some "integration" protocol to merge the states of replica groups when the partitions are re-joined. However, such integration is not guaranteed to be resolvable.

If we take the case of a fixed primary site only, i.e., no secondary takes over because partitioning is possible, then a resource replicated using this strategy only increases the read availability. Its write availability is the same as the availability of the primary site. The other replication strategies all provide ways of increasing write availability.


Regeneration

Regeneration [Pu 86] is a similar replication scheme to Available Copies in that a client only requires one replica to service a read request but a write must be performed by all copies. When an update occurs, if fewer than the required number of replicas are available then additional copies are regenerated on other operating nodes. In doing so the system must check that there is sufficient space available on the target node for the new replica (in terms of the volatile and stable storage that it may use) and also that a copy does not already reside there. A write failure occurs if it is not possible to update the correct number of replicas, and a read failure occurs if no replica is available.

A recovering node and replica cannot simply rejoin the system. For each replicated resource the system must check to see whether the maximum number of replicas already exist. If so the recovering replica is deleted. If the maximum number does not exist the system must check one of the available replicas to determine whether the state of the recovering replica is consistent. If the recovering replica is inconsistent (i.e. an update has occurred since this replica failed) then it must be deleted because a new copy has already been created to take its place but is currently unavailable. 
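A sketch of the write path, with regeneration of missing copies (node names and structure are my own; the storage-space checks and abort handling described above are elided):

```python
# Regeneration, sketched: reads need one replica, writes need the full
# target number, and missing copies are regenerated on spare nodes.
# Node names are illustrative; storage-space checks are elided.

class RegenerationGroup:
    def __init__(self, nodes, target_copies):
        self.target = target_copies
        self.copies = {n: {} for n in nodes[:target_copies]}  # initial replicas
        self.spare_nodes = list(nodes[target_copies:])

    def write(self, key, value, reachable):
        live = [n for n in self.copies if n in reachable]
        if not live:
            return False   # no current copy reachable: write failure
        # copies we cannot reach are treated as failed; the protocol
        # deletes such stale copies when their nodes later recover
        for n in list(self.copies):
            if n not in reachable:
                del self.copies[n]
        # regenerate replacements on spare operating nodes
        while len(live) < self.target:
            spares = [n for n in self.spare_nodes if n in reachable]
            if not spares:
                return False   # write failure: cannot reach the target count
            spare = spares[0]
            self.spare_nodes.remove(spare)
            self.copies[spare] = dict(self.copies[live[0]])  # copy current state
            live.append(spare)
        for n in live:
            self.copies[n][key] = value
        return True

    def read(self, key, reachable):
        for n in self.copies:
            if n in reachable:
                return self.copies[n].get(key)   # any one replica suffices
        raise RuntimeError("read failure: no replica available")

group = RegenerationGroup(["A", "B", "C", "D"], target_copies=2)
assert group.write("k", 1, reachable={"A", "C", "D"})   # B down: regenerate on C
assert set(group.copies) == {"A", "C"}
assert group.read("k", reachable={"C"}) == 1
```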

Weighted Voting

The Voting (or Quorum Consensus) replication protocol [Gifford 79] is a replication scheme which can operate correctly in the presence of network partitions. In this method a non-negative weight is assigned to each replica and this weight information is available to every client in the network. When a client wishes to read (write) the group it must first gain access to what is known as a read (write) quorum. A quorum is any set of replicas with (typically) more than half the total weight of all copies. A read quorum (Nr) and a write quorum (Nw) are defined such that Nr+Nw > N (the total weight).

A read operation requires access to any Nr copies (only data from up-to-date replicas should be used), and a write operation requires Nw up-to-date copies (so updates are not applied to obsolete replicas). The number of inaccessible copies tolerated by a read is N-Nr, and for a write operation it is N-Nw. The purpose of having quora is to ensure that read and write operations have at least one copy of an object in common. If the network partitions then voting allows access only from the majority partition, if one exists.

Associated with each replica is a timestamp or update number which clients can use to determine which replicas are up-to-date. If Nw ≠ N then a read quorum is required to obtain the most up-to-date version of this number. If Nw = N then every functioning copy must contain the same value because every write operation has been performed on every replica in the group. This update number is used by clients to ensure that they only read data from up-to-date replicas, even though they may acquire access to out-of-date replicas in their read quorum.
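The quorum arithmetic and the use of update numbers can be sketched as follows (the weights, quorum sizes and update numbers are illustrative):

```python
# Quorum arithmetic for weighted voting, sketched in Python.
# Weights, quorum sizes and update numbers below are illustrative.

def has_quorum(weights, accessible, quorum):
    """True if the accessible replicas carry at least `quorum` weight."""
    return sum(weights[r] for r in accessible) >= quorum

weights = {"a": 1, "b": 1, "c": 1}   # three equally weighted copies
N = sum(weights.values())            # total weight N = 3
Nr, Nw = 2, 2                        # Nr + Nw = 4 > N, so quora must overlap

assert has_quorum(weights, {"a", "b"}, Nr)     # a read survives one crash
assert has_quorum(weights, {"a", "c"}, Nw)     # so does a write
assert not has_quorum(weights, {"a"}, Nw)      # a single copy cannot write

# Clients use the update number to pick an up-to-date copy from the quorum.
versions = {"a": 7, "b": 7, "c": 5}  # replica c missed the last two writes
read_quorum = {"a", "c"}
newest = max(read_quorum, key=versions.get)    # read only from the newest copy
assert versions[newest] == 7
```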

The write operation is a two-phase, atomic operation, because either the states of Nw copies are modified or none of them are changed (to ensure that subsequent read and write quorum overlap and that a majority of the replicas are consistent). If a write quorum cannot be obtained the transaction must be aborted. However, a separate transaction can be run to copy the state of a current replica to an out-of-date replica. It is always legal to copy the contents of replicas in this way.

The weights assigned to replicas should be based on their relative importance to the system, e.g., a printer spooler which resides on a very fast node would be better for throughput than one which resides on a slower node, and would therefore be assigned a higher weight. A replica with a higher weight is more likely to be in the quorum component.

A major problem with this protocol is that read operations require a quorum, even if there is a local copy of the object. This can prove inconvenient (and slow) if a client wishes to carry out many read operations on an object in a short space of time. There is also the problem of fault tolerance: many copies of an object must be created to be able to tolerate only a few failures, e.g., we require five copies just to be able to tolerate two crashes.

Note that to cut down on the storage requirements, Witnesses [Paris 86] can be used in place of actual replicas. A witness only maintains the current version number of the data. Witnesses can take part in quora, but there must be at least one "real" replica in a quorum. Other replication protocols based on the Voting protocol also exist [Abbadi 90][Jajodia 89][Davcev 89]. They address issues such as the need to acquire a quorum for every read. One variation replaces a failed copy with a "ghost": a stand-in that holds no state but can participate in quora alongside the responses of "true" copies. A write can only succeed if a write quorum contains at least one non-ghost copy.

Because of the way in which ghosts are created and used, a ghost is used to ensure that a particular non-ghost copy has failed due to a node crash and not to a segment partition. If it is possible to create a ghost then the segment has not been partitioned and only the node has failed. In Available Copies when a replica does not respond it is simply assumed that it is because of node failure, which can result in inconsistencies if the copy was partitioned. When the node on which the non-ghost copy originally resided is re-booted it is possible to convert the ghost copy back into its live version. 

Friday, December 22, 2017

Available Copies

In the Available Copies [Bernstein 87] replication protocol, a user of a replicated service reads from one replica and writes to all available replicas. Prior to the execution of an action, each client determines how many replicas of the service are available and where they are (this information may be stored in a naming service and is accessed before each atomic action is performed). Whenever a client detects a failure of a replica it must update the naming service (name-server) view of the replicated object by performing a delete operation for the failed copy. All copies of the name-server, if it too is replicated, must be updated atomically.

When a write operation is performed all copies are written to and they must all reply to this request within a specified time (it is assumed that it is always possible to communicate with non-faulty replicas). Locks must be acquired on all of the functioning replicas before the operations can be performed, and if conflicts between clients occur then some replicas will not be locked on behalf of a client, and the client will be informed, at which point the calling action is aborted. Using this locking policy and the serialisability property of the actions within which operations occur, it is possible to ensure that all replicas execute the operations in identical order. 

If all replicas reply to a write operation then the action may continue. However, if only a subset reply, the action must ensure that the silent members have in fact failed. If the silent replicas subsequently reply then the action must abort and try again (this is because the states of the replicas may have diverged). However, if the silent copies have actually failed then the action can still commit since all available copies are in a consistent state.

Whenever a new copy is created (or recovers from a failure) it must be brought up-to-date before the name-server is informed of the recovery (before a client can make use of the replica). When this is done the copy can take requests from clients along with the other members of the group. The updating of recovered replicas can be done automatically if an out-of-date replica intercepts/receives a write request from a current transaction, as has been mentioned previously.
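A minimal sketch of the read-one, write-all-available behaviour and the name-server view (identifiers are my own; locking and the commit-time validation check are omitted):

```python
# "Read one, write all available", sketched. The name-server is modelled
# as a set of replicas believed to be up; identifiers are illustrative.
# Locking and the commit-time validation check are omitted.

class AvailableCopies:
    def __init__(self, replicas):
        self.states = {r: {} for r in replicas}
        self.name_server = set(replicas)    # replicas believed available

    def write(self, key, value, responding):
        available = self.name_server & responding
        if not available:
            raise RuntimeError("write failure: no available copy")
        # any copy that fails to respond within the time limit is
        # deleted from the name-server view of the group
        self.name_server &= responding
        for r in available:
            self.states[r][key] = value

    def read(self, key, responding):
        for r in self.name_server & responding:
            return self.states[r].get(key)   # read from any one copy
        raise RuntimeError("read failure: no available copy")

group = AvailableCopies(["r1", "r2", "r3"])
group.write("k", 1, responding={"r1", "r2", "r3"})
group.write("k", 2, responding={"r1", "r2"})    # r3 silent: drop it from the view
assert group.name_server == {"r1", "r2"}
assert group.read("k", responding={"r1"}) == 2
```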

Consider the history of events shown in the diagram below, where T1 and T2 are different transactions operating on two replica groups whose members are x1, x2 and y1, y2. Assume that T1 and T2 are using a "read-one copy, write-all-available copies" scheme and that there are initially two copies of objects x and y which they both wish to access. The execution of events is as shown, with time increasing down the y-axis.

If we examine the above history, it is clearly not 1SR, i.e., neither the serial execution T1;T2 nor T2;T1 is consistent with it. Thus, the idea of "read-one copy, write-all-available copies" by itself cannot guarantee 1SR. It is necessary to execute a validation protocol before the transaction can commit to ensure correctness. In Available Copies this takes the form of ensuring that every copy that was accessed is still available at commit time, and every replica that was unavailable is still unavailable, otherwise the action must abort.

Because of the assumption made by Available Copies that all functional replicas can always be contacted, this protocol cannot be used in the presence of network partitions. A node which is partitioned cannot be distinguished from a failed node until it has been reconnected. If the replication protocol assumes that all unavailable nodes have failed when in fact some have only been partitioned, inconsistencies can result in the replicas. As such, if partitions can occur then the replication protocol must be sufficiently sophisticated to ensure consistent behaviour despite such failures.

Data Replication in Atomic Action Systems

Data replication techniques for atomic action systems to maintain one-copy serialisability (1SR) have been extensively studied (most notably with regard to replicating databases). When designing a replication protocol it is natural to examine those protocols (and systems which use them) that already exist, to determine whether they have any relevance.

  • Definition: If the effect of a group of atomic actions executing on a replicated object is equivalent to running those same atomic actions on a single copy of the object then the overall execution is said to be 1SR.

  • Definition: A replica consistency protocol is one which ensures 1SR.

Because most replication protocols have been developed for use in database environments it is important to understand the differences between the way in which operations function in a database system and the way in which similar operations would function in an object-oriented environment. These differences are important as they affect the way in which the replication protocols function.

In a database system, which performs operations on data structures, a read operation is typically implemented as a "read entire data structure", and a write operation is in fact a "read entire data structure, update state locally to the invoker, then write entire new data state back". In this way, a single write operation can also update (or re-initialise) the state of an out-of-date data structure.

In an object-oriented system, the read operation is typically implemented as "read a specific data value". Similarly, the write is "perform some operation which will modify the state of the object". The object simply exports an interface with certain operations through which it is possible to manipulate the object state. Some of these operations may update the state of the object, whilst others will simply leave it unchanged. A write operation in this case may only modify a subset of an object's state, and so cannot be guaranteed to perform an update as in a database system.

In a database system, the fact that a single write operation can update the entire state of a replica is used in replication protocols such as Available Copies. If these protocols are to be used in an object-oriented system then they will require explicit update protocols.

Finally, in a database system the invoker of a given operation knows whether that operation is state modifying or not i.e., it knows which type of lock will be required. However, in an object-oriented system users of a given object only see the exported interface and see nothing of the implementation, and therefore do not know whether a given operation will modify the state of the object. This difference is important as many of the replication protocols to be described implicitly assume that clients have this type of knowledge (it is used to ensure that read operations can be executed faster than write operations).
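A small illustration of this last point, using a hypothetical Counter object: nothing in its exported interface tells a client which operation modifies state.

```python
# Why an object interface hides whether an operation writes: a client of
# this (hypothetical) Counter sees only the method names, and nothing in
# the interface says which operation needs a read lock and which a write lock.

class Counter:
    def __init__(self):
        self._value = 0

    def report(self):
        return self._value   # read-only: a read lock would suffice

    def tick(self):
        self._value += 1     # state modifying: a write lock is required
        return self._value

# A database client knows that a read operation reads and an update writes;
# a client of Counter cannot tell report() from tick() without seeing the
# implementation, so a replication protocol cannot rely on that knowledge
# when choosing lock modes or deciding which replicas must be updated.
counter = Counter()
assert counter.report() == 0
assert counter.tick() == 1
```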

In the next few entries we shall examine some of the replication protocols which have been proposed for managing replicated data.

Wednesday, December 20, 2017

Replication and Atomic Actions

Replication can be integrated into atomic actions in two ways: replicas within actions, and replicated actions. These two methods are substantially different from each other and yet attempt to solve the same problem: to ensure (with a high degree of probability) that an action can commit despite a finite number of component failures.

Replicas within Actions

Of the two methods of combining replication with atomic actions, this method is the most intuitive: each non-replicated object is replaced by a replica group. Whenever an action issues a request on the object, this is translated into a request on the entire replica group (how the requests are interpreted depends upon the replica consistency protocol). Failures of replicas within a given replica group are handled by the replication protocol in an effort to mask them, until so many failures have occurred that masking becomes impossible, at which point the replica group fails. When an action comes to commit, the replication protocol dictates the minimum number of replicas which must be functioning in any given replica group for that group to be considered operational and allow the action to commit.

Problems which arise from this method have already been outlined in earlier entries. The replication protocol must be able to deal with multiple messages from both replicated client groups as well as replicated server groups; the majority of replication protocols assume that replicated objects within the same replica group are deterministic; problems can occur in maintaining consistency between replicas when local asynchronous events can occur such as RPC timeouts.

Replicated Actions 

The idea behind Replicated Actions [Ahamad 87][Ng 89] is to take an atomic action written for a non-replicated system and replicate the entire action to achieve availability. In this scheme the unit of replication is the action along with the non-replicated objects used within it. In the Clouds distributed system [Ahamad 87] the replicated actions are called PETS (Parallel Execution Threads). By replicating the action N times (creating N PETS), and hence by also replicating the objects which the action uses, the probability of at least one action being able to commit successfully is increased. Although the actions and objects are replicated, each action only ever communicates with one replica from a given object-replica group and does not know about either the other replicas or the other actions until it comes to commit. Consequently, if this replica fails, so too does the action (as is the case in a non-replicated system).

Because each action only uses one replica in a given group and each replica only receives messages from this one action, the replicas need not be deterministic, and there is no need to devise some protocol for dealing with multiple requests (or replies) from a client (server) group. When the replicated actions commit, it is necessary to ensure that only one action does so (to maintain consistency between the replicated objects). To do this, typically the first action to commit (the fastest) as part of its commit protocol copies the states of the objects it has used to those replicas it did not use (i.e. the replicas used by the other actions) and causes the other actions to abort. This is done atomically, so that if this atomic action should fail then the next replicated action which commits will proceed as though it had finished first.
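A minimal sketch of this first-committer-wins rule follows; the function name and the dictionary representation of object state are assumptions for illustration, and the atomicity of the copy is only asserted in a comment rather than implemented.

```python
def commit_first(replica_states, winner_index):
    """replica_states: the state of the replica each replicated action
    used. The first action to commit (the winner) imposes its view of
    the state on the replicas the other actions used, and those other
    actions are aborted."""
    winning_state = replica_states[winner_index]
    # In a real system this copy-and-abort step is itself an atomic
    # action, so a failure mid-way lets the next committer start over.
    new_states = [dict(winning_state) for _ in replica_states]
    aborted = [i for i in range(len(replica_states)) if i != winner_index]
    return new_states, aborted
```

After the call, every replica holds the winner's state and the remaining actions' work is discarded, which is exactly the source of wasted effort discussed below.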

The figure above shows the state of the objects A, B, C, and D, each replicated three times, just as the replicated actions α, β, and ε (also replicated three times) begin commit processing. The execution paths of these actions are indicated by the lines.
When actions α and β come to commit they will be unable to do so, because object D (which α used) and object C (which β used) have failed, making commit impossible. However, action ε will be able to commit, and will then update the states of all remaining functioning replicas. Failed replicas must be updated before they can be reused.

Obviously the choice of which replicas the actions should use is important. In the figure, if action ε had used the same copy of object D as action α, then the failure of this object would have caused two actions to fail instead of one. In some systems [Ahamad 87] the choice of which replica a particular action uses is made randomly, on the assumption that with a sufficient number of object replicas the chances of two actions using the same copy are small. However, [Ng 89] proposes a different approach, using information about the nature of the distributed system (reliability of nodes, probability of network partitions occurring) to compute an optimal route for each action to take when using replicated objects. This route attempts to minimize the object overlap between replicated actions and to minimize the number of different nodes that an action visits during the course of its execution.
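The goal of minimizing overlap can be illustrated with a toy assignment policy. The round-robin scheme below is an invented stand-in for [Ng 89]'s optimal routing, used only to show what "zero overlap" means; it achieves no overlap whenever there are at least as many replicas per object as there are actions.

```python
def round_robin_assignment(num_actions, replicas_per_object, num_objects):
    # Action i uses replica (i mod N) of every object, so no two
    # actions share a replica while num_actions <= replicas_per_object.
    return [[i % replicas_per_object for _ in range(num_objects)]
            for i in range(num_actions)]

def overlap(assignment):
    # Count, over all pairs of actions, how many objects they access
    # through the same replica (each shared replica is a shared fate).
    count = 0
    actions = len(assignment)
    for a in range(actions):
        for b in range(a + 1, actions):
            count += sum(1 for x, y in zip(assignment[a], assignment[b])
                         if x == y)
    return count
```

With three actions, three replicas per object, and four objects (as in the figure), this assignment gives an overlap of zero, so any single replica failure can take down at most one action.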

The advantages of using replicated actions as opposed to replicas within an action are the same as those obtained from using a passive replication protocol rather than an active one: there is no need to ensure that all copies of the same object are deterministic, since the action which commits will impose its view of the state of the object on all other copies, and each replica will receive a reduced number of repeated requests from replicated clients (reduced to only one message if each action makes use of a different replica).
However, the scheme also suffers from the problems inherent in a passive replication scheme: checkpointing state across a network can be a time-consuming, expensive operation. If we assume failures to be rare (as should be the case) then this scheme will cause large numbers of actions to abort: the action which commits first will overwrite the states of the other replicas, effectively removing the knowledge that the other actions ever ran. In some applications it may prove more of an overhead to checkpoint the state of one action in this way than to allow all functioning actions simply to commit.

Monday, December 18, 2017

An overview of some existing distributed systems (of the time)

Existing distributed systems which make use of multicast communication can be categorised as using either one-to-many communication (one client interacting with a replica group) or many-to-many communication (replica groups interacting with replica groups). We shall now look at a number of the popular protocols and show how they have approached the problems of maintaining replica consistency in the face of both external and internal events.

One-to-Many Communication 

The V System

The V System [Cheriton 84][Cheriton 85] makes use of what they term one-to-many communication transactions, where the start of a transaction marks the end of the previous transaction (any outstanding messages associated with the old transaction are destroyed without being processed). Each group member has a finite message buffer into which replies are queued, provided they arrive before the start of the next send transaction. Although a reliable communication protocol is not used, the assumption is made that by making use of retransmissions it is possible to ensure (with a high probability) that messages are delivered if the sender and receiver remain operational. A further assumption is that at least one operational group member receives and replies to a message.

The designers of the V System recognized the possibility of message loss due to message buffer overflow, and their solution was to make use of larger buffers and to modify the kernel to introduce a random delay when replying to a group send, thereby reducing the number of messages in a queue at any time. This imposes a consistent overhead on all group replies but still does not ensure that buffer overflow will never occur (if the number of different groups interacting with each other increases then even with this delay it is still probable that the finite sized message queues will overflow).
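The finite reply buffer and its overflow behaviour can be sketched as a toy model (the class and method names are invented; the V System's actual kernel buffering is not shown):

```python
from collections import deque

class ReplyBuffer:
    """Toy model of a V System group member's finite reply queue."""

    def __init__(self, capacity):
        self.queue = deque()
        self.capacity = capacity
        self.dropped = 0  # replies lost to overflow

    def deliver(self, reply):
        if len(self.queue) >= self.capacity:
            # Buffer overflow: the reply is silently lost. The random
            # reply delay only makes this less likely, never impossible.
            self.dropped += 1
        else:
            self.queue.append(reply)

    def start_new_transaction(self):
        # Starting a new send transaction destroys any outstanding
        # replies from the previous transaction without processing them.
        self.queue.clear()
```

A buffer of capacity two that receives three replies before the next transaction drops the third, which is precisely the loss the larger-buffers-plus-delay mitigation cannot rule out.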

When servers reply to a client they do so on a many-to-one basis, i.e., all servers which received a request from a particular client will attempt to reply to that client (and not to any replica group the client may be a member of). This means that there is the possibility of members of the same group taking different actions as a result of local events such as timeouts (resulting in state divergence), because the servers replied to some, but not all, clients on time.

The Andrew System 

The Andrew System [Satyanarayanan 90] assumes that the communication system is reliable in much the same manner as the V System, i.e., given a sufficient number of retries any operational server can always be contacted. Communication is one-to-many and the termination conditions for a particular client-server interaction are set on a per-call basis (depending on the application). The designers recognized that calls between remote processes can take an indeterminate amount of time, so if a client times out it sends an "are you alive" message to any server replica which has not responded, and if a reply is returned then it continues to wait. However, because servers reply on a many-to-one basis, state divergence between replicated clients can still occur: if only one client receives a reply sent before all servers fail, then the replicated clients may take different actions.

It was recognized that messages might be lost by processes because of message buffer overflow, but in Andrew it is assumed that such losses will be detected by retries. This implicitly relies on the servers keeping the last transmitted reply to every client, but could still result in state divergence if, for example, a server group is seen to have failed by one replicated client when it retries a request, when in fact all other members of the client group received the reply before the failure.

Many-to-Many Communication 

The Circus System

The Circus System [Cooper 84b] (and Multi-Functions [Banatre 86b] in Gothic [Banatre 86a]) makes use of many-to-many communication, with both client and server groups. As with the V System they assume that with a sufficient number of transmission retries it is possible to deliver messages to all operational members of a group. Thus, if a call eventually times out (after retrying a sufficient number of times) a sender can assume that the receiver(s) has failed. Each member of a group has a message queue which is assumed to be infinite in size (if this assumption cannot be made then the designers of Circus assume that retransmissions of requests and replies can account for losses due to buffer overflow).

Throughout the description of Circus the assumption is made that the members of a group are deterministic (given the same set of messages in the same order and the same starting state, they will all arrive at the same next state). Members of a client group operate as though they were not replicated and do not know that they are members of a replica group. No mention is made of how this determinism of group members extends to asynchronous local events such as timeouts.
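The determinism assumption amounts to saying that a replica's next state is a pure function of its current state and the incoming message, with no dependence on clocks, timeouts, or other local asynchronous events. A minimal sketch (the function name and the use of a user-supplied transition function are illustrative assumptions):

```python
def run_replica(start_state, messages, apply):
    """Replay a message sequence through a pure transition function.
    Because apply depends only on (state, message), any two replicas
    given the same messages in the same order from the same starting
    state must finish in the same state."""
    state = start_state
    for msg in messages:
        state = apply(state, msg)
    return state
```

The moment a replica's behaviour also depends on a local timeout firing (or any other asynchronous event), this property is lost and the states of group members can diverge, which is exactly the gap noted above.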

Despite the assumptions made by the designers, both the problem of message buffer overflow, and local timeouts, can occur in the Circus system, leading to replica state divergence.  


Following on from the last entry: I did the initial implementation of red/REL as part of my PhD, and that document includes an overview of the protocol as well as some performance figures and proposed optimisations to the protocol. For now I won't cover that here as there's a good paper in the previous reference to read, or the PhD of course. However, maybe I'll come back to this later.