One Billion IOPS: Auto Commit Memory Blurs the Line Between Enterprise Storage and Memory

Posted: 05 Jan 2012   By: Nisha Talagala

Today, Fusion-io previewed a new technology, Auto Commit Memory (ACM). We are very excited about the impact this technology could have on enterprise applications, so I would like to share some details about ACM, how it can be used, and some perspective on the change that it brings.

When flash-based non-volatile memory first emerged in the enterprise domain, there was a clear separation between memory and disk. Memory was “fast”, accessed directly via CPU load and store semantics that bypassed the OS, and was volatile. Storage (or disk) was “slow”, accessed via the operating system through system calls such as read and write, and was persistent. Applications brought their data in from storage, operated on it in memory, and put the result back in storage when finished.

Non-volatile memory (NVM) was first integrated as a fast disk. By replacing spinning disks delivering hundreds of IOPS with flash devices delivering hundreds of thousands to millions of IOPS, NAND flash has been able to dramatically accelerate enterprise applications ranging from databases, caches, and virtualization, to search engines and messaging services. However, the programming model remained that of storage, with access to NVM still commonly made via system calls such as read and write.

ACM enables the next stage in both performance and programming model for non-volatile memory. ACM is a new memory type that uses the underlying flash to present a persistent memory directly to applications. At DEMO Enterprise, we showed a rate of one billion 64-byte operations per second, more than 100x greater than what has been showcased to date with flash as a fast disk. With ACM, applications can declare a region of virtual address space as persistent. The application can then program to this region using regular memory semantics such as pointer dereferences, and access the memory directly via CPU load and store operations that bypass the operating system. Since ACM bypasses OS overheads, latency is significantly reduced. ACM works in cooperation with the existing platform hardware and CPU memory hierarchy to apply memory updates and ensure data persistence in the event of a failure or restart.
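To make the programming-model shift concrete, here is a minimal sketch of the persistent-memory idiom described above, using a POSIX memory-mapped file as a stand-in for an ACM region. This is an analogy only: the file path, sizes, and flush call are my assumptions, not the ACM API, and with ACM the durability guarantee comes from the device rather than an explicit page flush.

```python
import mmap
import struct

# Emulate a persistent memory region with a memory-mapped file.
# ACM would map flash directly into the virtual address space;
# here a plain file-backed mmap stands in for that region.
REGION_SIZE = 4096
PATH = "acm_region.bin"  # hypothetical backing file, not an ACM name

# Create and size the backing store.
with open(PATH, "wb") as f:
    f.truncate(REGION_SIZE)

f = open(PATH, "r+b")
region = mmap.mmap(f.fileno(), REGION_SIZE)

# Store data with memory semantics: a direct byte assignment into the
# mapped region, with no read()/write() system call on the data path.
region[0:8] = struct.pack("<Q", 42)  # an 8-byte counter at offset 0

# Persistence point: with a plain mmap we must ask the OS to flush
# dirty pages; ACM instead guarantees durability of committed stores.
region.flush()

# Read back through the file, as a restarted application would.
region.close()
f.close()
with open(PATH, "rb") as f2:
    value = struct.unpack("<Q", f2.read(8))[0]
print(value)  # 42
```

The key contrast with the storage model is that the update is a plain store into mapped memory; only the durability step involves the OS at all, and ACM is designed to remove even that from the critical path.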

The separation between storage and memory has been a foundational part of software and operating systems architecture for the past 20 years. Flash as fast disk was the first to blur this distinction. We see the effects already in software, with flash-based caching tiers fast becoming the norm in both servers and storage. With ACM we create the next step in the blurring of storage and memory – a tier of direct-access persistent memory – which will further change what “memory” and “storage” mean in enterprise architectures.

We expect initial usages of ACM to be for logging acceleration in databases and file systems, where fast commit is critical for performance. We will be working with application developers to explore other new opportunities that this technology brings.
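As a sketch of why fast commit matters for logging, the pattern below appends write-ahead-log records into a persistent region and makes each durable with a store plus flush, rather than a write() plus fsync(). The file-backed mmap again stands in for an ACM region, and the record layout (length, CRC, payload) is illustrative, not a Fusion-io format.

```python
import mmap
import struct
import zlib

# A toy write-ahead log living in an emulated persistent region.
LOG_SIZE = 4096
PATH = "wal_region.bin"  # hypothetical backing file, not an ACM name

with open(PATH, "wb") as f:
    f.truncate(LOG_SIZE)

f = open(PATH, "r+b")
log = mmap.mmap(f.fileno(), LOG_SIZE)
tail = 0  # next free offset in the log region

def commit(payload: bytes) -> None:
    """Append a [length][crc32][payload] record, then flush to persist."""
    global tail
    header = struct.pack("<II", len(payload), zlib.crc32(payload))
    log[tail:tail + 8] = header
    log[tail + 8:tail + 8 + len(payload)] = payload
    log.flush()  # the commit's durability point in this emulation
    tail += 8 + len(payload)

commit(b"BEGIN txn 1")
commit(b"UPDATE row 7")
commit(b"COMMIT txn 1")

# Recovery would scan records and verify each CRC before replaying.
length, crc = struct.unpack("<II", log[0:8])
rec = bytes(log[8:8 + length])
print(rec.decode())  # BEGIN txn 1
```

In a database the flush (or its ACM equivalent) sits on the transaction commit path, which is why cutting its latency translates directly into higher commit rates.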

Read the press release here.

For more details, check out the data sheet here.

Nisha Talagala

Lead Architect, Engineering Software