Monday, November 09, 2009

In-memory durability and HPTS

Back in the 1980s, when I was writing the proposal for my PhD work, I was looking at various uses for replication (at that point, strong consistency protocols). There are a number of reasons for replicating data or an object, including high availability, fault tolerance through design diversity, and improved application performance. The latter could include reading data from a physically closer replica, or from one that resides on a faster machine reachable over a faster network path.

But in terms of how replication and transactions could play well together, it was using replicas as a "fast backing store", aka a highly available in-memory log, that seemed the logical thing to concentrate on. We certainly had success with this approach, but the general idea of replication for in-memory durability didn't really take off within the industry until relatively recently. I think one of the important reasons for this is that improvements in network speeds and processor performance have continued to outstrip disk performance, making these kinds of optimization less academic and more mainstream. So it was with a lot of interest that I listened to presentation after presentation at this year's HPTS about this approach. Of course there were presentations on improving disk speeds and using flash drives as a second-level cache too, so it was a good workshop all round.
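
To make the idea concrete, here is a minimal sketch (illustrative only, not any particular product or the protocol from my PhD work): a transaction is reported as committed once a majority of replicas hold the log record in memory, rather than after a disk force. The names (ReplicatedLog, Replica, append) are made up for the example.

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative only: "durable" here means the log record is held in memory on a
// majority of replicas, not that it has been forced to disk.
public class ReplicatedLog {

    /** A replica keeps log records in memory and acknowledges receipt. */
    public interface Replica {
        void append(LogRecord rec) throws Exception;
    }

    public static final class LogRecord {
        final long txId;
        final byte[] payload;
        LogRecord(long txId, byte[] payload) { this.txId = txId; this.payload = payload; }
    }

    private final List<Replica> replicas;
    private final ExecutorService pool = Executors.newCachedThreadPool();

    public ReplicatedLog(List<Replica> replicas) {
        this.replicas = replicas;
    }

    /**
     * Sends the record to every replica and returns true once a majority have
     * acknowledged it, at which point the commit can be reported to the client.
     */
    public boolean commit(LogRecord rec, long timeoutMillis) throws InterruptedException {
        int quorum = replicas.size() / 2 + 1;
        CountDownLatch acks = new CountDownLatch(quorum);

        for (Replica r : replicas) {
            pool.submit(() -> {
                try {
                    r.append(rec);      // network send + in-memory append, no fsync
                    acks.countDown();
                } catch (Exception e) {
                    // a failed or slow replica simply doesn't count towards the quorum
                }
            });
        }
        return acks.await(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```

The obvious trade-off is that a correlated failure (say, a site-wide power loss) can still lose the in-memory log, which is why such systems pay attention to replica placement and typically drain the log to stable storage in the background.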

4 comments:

  1. I was thinking the other day about what would happen in the (probably likely) event that volatile DRAM-style memory goes away in future, and all memory becomes non-volatile (across power cycles).
    This is the way it was some time ago, and if it happens again, what would change in terms of architecture if nothing suddenly disappears (at least due to power loss)?
    I guess what we are left with is a hierarchy of memory speed/cost/distance from the core processors, so most things still apply.

  2. Yes, and that was the subject of at least one of the flash/RAM talks at HPTS. Once you get to that level you continue to try to optimize the heck out of the system. So, as you say, the same concerns apply.

  3. Mark, you might be interested to know that my company will soon be launching a high-performance transaction/ORM product using the idea of replication for in-memory durability of the transaction log. We currently do thousands of transactions per second with a single CPU manager box ... scalable, of course.

    btw, the link to HPTS is mis-spelt as "htps.ws"

  4. Thanks Matthew. I fixed the typo.

    Good to know that another user has entered the arena with us.
