It’s easy to copy application binaries over long distances and to change DNS entries, but what about application state? How do you move data between your old-school virtual machines running in your private datacenter and your new-school containers running in a bare-metal service provider? These are a few of the problems that we have been working on.

One of the core features introduced in Anti-gravity is hybrid-cloud data mobility. The underlying technology is called Object Exchange (“OX”). OX integrates Elastic Block Storage (“EBS”) with any S3-compatible object storage provider. OX allows you to snapshot volumes from block storage directly into object storage, and to instantly clone volumes from object storage, copying data on demand without waiting for a full volume transfer. OX addresses a number of fundamental mobility issues, including backup, disaster tolerance and migration of elastic block storage.

When we set out to build OX, we spent a lot of time discussing requirements as well as evaluating existing solutions.

An important consideration is client impact. Cryptography and compression consume significant CPU cycles. Snapshots, metadata and buffering consume significant memory. Transferring the data itself requires network bandwidth. It was clear from the start that we could not impose resource requirements on the client; otherwise, we would be left explaining unexpected application performance degradation. Worse yet, clients would need to overprovision resources to account for it.

We also considered interoperability. It should not matter what application or operating system is running on the client (e.g., VMware, Windows, Solaris or any flavor of Linux), and the solution must support any filesystem or raw-device application. Can you imagine if we told you that you had to run a specific filesystem on the client?!

Security was another important consideration. Encryption and integrity validation are basic requirements for any solution, but we also needed to tackle authentication, key management and auditing. We also wanted a solution that did not require clients to have access to public networks.

Last and definitely not least, we had to consider performance. The idea of having to transfer an entire backup before accessing the data seemed to be an arbitrary constraint, and decidedly old-school. So, we designed a solution that allows us to fetch whatever data we need, on demand.

Block / Object Virtualization:

OX is a new type of virtualization engine that translates between block devices and object storage devices. From a user perspective, a block device is essentially an array of fixed-size sectors, generally 512 or 4096 bytes. Object storage devices store variable-sized blobs of data that are addressed by arbitrary keys.
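
To make the distinction concrete, here is a minimal sketch of the two address models OX translates between. The sector size, variable names and object key are illustrative assumptions, not part of OX.

```python
SECTOR_SIZE = 512  # block devices address fixed-size sectors (assumed size)

# A block device: a flat array of fixed-size sectors, addressed by index (LBA).
block_device = bytearray(SECTOR_SIZE * 8)

def read_sector(dev: bytearray, lba: int) -> bytes:
    """Read one fixed-size sector by logical block address."""
    off = lba * SECTOR_SIZE
    return bytes(dev[off:off + SECTOR_SIZE])

# An object storage device: variable-sized blobs addressed by arbitrary keys.
object_store: dict[str, bytes] = {}
object_store["backups/vol0/obj-0001"] = b"a blob of any length"
```

Every access on the block side is a fixed-size slice at a computed offset; on the object side, it is a whole-blob GET or PUT by key.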

How does it work?

In the most basic sense, OX has two primitive functions: encode and decode. Encode packs sectors from a block device into an object; at some configurable size threshold, the object is sealed and PUT to an object storage device. Decode performs the reverse: it locates the object that contains a sector, GETs it, and then unpacks it. OX is like a key/value store that is distributed across an arbitrary number of objects. To round out the feature set for hybrid cloud, we’ve added compression, at-rest encryption, in-flight encryption, bandwidth throttling, key management and integrity checking.
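
The encode/decode cycle can be sketched as a toy model. This is not the OX implementation: a dict stands in for the S3 PUT/GET calls, the seal threshold and the `vol0/obj-…` key scheme are invented, and zlib stands in for the production compression.

```python
import zlib

SECTOR_SIZE = 512
SEAL_THRESHOLD = 4 * SECTOR_SIZE  # the "configurable size threshold" (assumed)

object_store: dict[str, bytes] = {}            # stands in for S3 PUT/GET
sector_index: dict[int, tuple[str, int]] = {}  # lba -> (object key, byte offset)

_buffer = bytearray()          # the currently open (unsealed) object
_buffered_lbas: list[int] = []
_next_obj = 0

def encode(lba: int, data: bytes) -> None:
    """Pack one sector into the open object; seal and PUT at the threshold."""
    global _buffer, _buffered_lbas, _next_obj
    _buffered_lbas.append(lba)
    _buffer += data
    if len(_buffer) >= SEAL_THRESHOLD:
        key = f"vol0/obj-{_next_obj:08d}"       # invented key scheme
        _next_obj += 1
        for i, packed_lba in enumerate(_buffered_lbas):
            sector_index[packed_lba] = (key, i * SECTOR_SIZE)
        object_store[key] = zlib.compress(bytes(_buffer))  # "PUT"
        _buffer, _buffered_lbas = bytearray(), []

def decode(lba: int) -> bytes:
    """Locate the object that contains a sector, GET it, and unpack it."""
    key, off = sector_index[lba]
    packed = zlib.decompress(object_store[key])            # "GET"
    return packed[off:off + SECTOR_SIZE]
```

The `sector_index` mapping is what makes the set of objects behave like one distributed key/value store: any sector can be located without scanning other objects.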

How is it integrated?

To achieve the desired operational characteristics, we’ve integrated OX into the micro-segmentation engine of our dataplane, enabling a number of important features. The primary goal of the low-level integration was the ability to create cloned virtual disks from object storage that are readable and writable. A secondary goal was to prevent performance impact on non-object related storage operations.

Object Basis for a cloned disk:

A cloned virtual disk can be visualized as having two layers. The top layer stores whatever data is written to the disk, and the lower layer (a.k.a. the “basis”) contains references to read-only data stored elsewhere. If you were to read from a cloned disk without having written to it, all data would be read from the basis. The result of our OX integration is that the basis of a cloned disk can be a set of objects stored in any S3-accessible object storage device. It does not matter whether the object storage provider is local or remote.
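
The two-layer read path can be sketched like this. The class and the dict-backed basis are illustrative assumptions; in the real system the basis resolves to objects fetched via OX, not an in-memory dict.

```python
SECTOR_SIZE = 512

class ClonedDisk:
    """Toy two-layer clone: writes land in the top layer; reads of
    unwritten sectors fall through to the read-only basis."""

    def __init__(self, basis: dict[int, bytes]):
        self.basis = basis               # lower layer: read-only data stored elsewhere
        self.top: dict[int, bytes] = {}  # top layer: everything written post-clone

    def write(self, lba: int, data: bytes) -> None:
        self.top[lba] = data             # the basis is never modified

    def read(self, lba: int) -> bytes:
        if lba in self.top:              # written since the clone was created
            return self.top[lba]
        # otherwise the sector comes from the basis (fetched on demand in OX)
        return self.basis.get(lba, b"\x00" * SECTOR_SIZE)
```

Because writes never touch the basis, any number of clones can share the same set of read-only objects.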

Consider a filesystem containing media files or a database that you have backed up with OX. If you want to access any of the content, you can create a clone and immediately mount it on any machine running in any provider. If you reference filesystem content that is currently non-local, the dataplane will transparently invoke OX to fetch, unpack, decrypt, decompress and re-segment the data blocks, on demand. Concurrently, the dataplane will pull data from the object storage device to “thicken” the cloned disk. This occurs in the background and obeys bandwidth policies that can be specified by the user. Data explicitly requested by a client is prioritized over background transfers. So, if you need immediate access to a 10MiB file from a 1TiB filesystem, you only need to transfer about 10MiB from object storage (the amount of transferred data is often lower than 10MiB due to compression).
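
The prioritization described above can be sketched with a simple priority queue. The two-level priority scheme is an assumption for illustration, not the actual dataplane scheduler.

```python
import heapq

FOREGROUND, BACKGROUND = 0, 1   # lower value = fetched sooner (assumed policy)

_queue: list[tuple[int, int, int]] = []  # (priority, sequence, lba)
_seq = 0

def enqueue(lba: int, priority: int) -> None:
    """Schedule a sector to be GET from object storage."""
    global _seq
    heapq.heappush(_queue, (priority, _seq, lba))
    _seq += 1

def next_fetch() -> int:
    """Pop the next sector to transfer: client reads beat thickening."""
    return heapq.heappop(_queue)[2]

# Background "thickening" schedules the whole basis for transfer...
for lba in range(4):
    enqueue(lba, BACKGROUND)
# ...but an explicit client read jumps the queue.
enqueue(42, FOREGROUND)

assert next_fetch() == 42  # the client-requested sector transfers first
```

The sequence counter keeps same-priority fetches in FIFO order, so background thickening still proceeds deterministically between client reads.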

What are the use cases?

In hybrid cloud, OX allows you to freely move data between service providers, co-location facilities and on-premises infrastructure. It fulfills disaster tolerance requirements via backup and provides a recovery time objective (RTO) of zero. It does so in an application-, hardware- and provider-independent manner.

In elastic block storage infrastructure, OX provides essential backup functionality, enables fast machine provisioning (virtual and bare-metal) and facilitates volume mobility across racks and tiers of storage.

And more…

Stay tuned… more OX-related news to come…