When you think of Lawrence Livermore National Laboratory (LLNL), chances are you think of supercomputers that take up acres of datacenter space. This type of large-scale supercomputing exists at labs all over the world, but now LLNL is taking a new approach to solving large-scale problems, one that is easier both on Mother Earth and on shrinking federal budgets.
LLNL displayed this new approach for the first time in the June Graph500 competition, where LLNL's "Kraken" ranked number seven on the list. What's remarkable about Kraken's result is that it was achieved with a single four-socket server. While systems ranked above and below it used hundreds or thousands of nodes, Kraken delivered a scale-34 result from a single node. In its latest submission, LLNL's new dynamo, "Leviathan," processes a scale-36 graph, meaning 2^36 (roughly 68.7 billion) vertices, on a four-socket Intel server, a problem four times the size previously attained. Leviathan can traverse 52.796 million edges per second. The problem scale rivals the June submission of "Franklin," a 4,000-node Cray system at NERSC.
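To put those scale numbers in perspective, here is a small sketch of the Graph500 problem-size arithmetic. The helper name and the default edge factor of 16 (the benchmark's standard ratio of edges to vertices) are stated here as assumptions for illustration, not LLNL code:

```python
# Hypothetical helper illustrating Graph500 problem sizing.
# A Graph500 "scale" S means the benchmark graph has 2**S vertices,
# with roughly 16 edges per vertex (the benchmark's default edge factor).

def graph500_size(scale: int, edge_factor: int = 16) -> tuple[int, int]:
    """Return (vertex_count, approximate_edge_count) for a given scale."""
    vertices = 2 ** scale
    return vertices, vertices * edge_factor

v36, _ = graph500_size(36)   # Leviathan's scale-36 run
v34, _ = graph500_size(34)   # Kraken's earlier scale-34 run
print(v36)                   # 68719476736 vertices, about 68.7 billion
print(v36 // v34)            # 4, i.e. four times the earlier problem size
```

Each step up in scale doubles the vertex count, which is why moving from scale 34 to scale 36 quadruples the size of the problem.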
So, what's the secret? How does a graph of that scale get processed on a single node? While most supercomputers distribute the graph into DRAM across hundreds or thousands of nodes, Leviathan relies heavily on a memory that is roughly 10× denser than DRAM. By using Fusion-io's ioMemory, Leviathan packed 12 TiB of NAND flash into a single server. The algorithm developed by LLNL's Dr. Maya Gokhale and PhD student Roger Pearce, combined with high-capacity, low-latency ioMemory, allows the large graph to be processed on a system that is orders of magnitude cheaper and denser than the alternatives. Imagine the initial investment savings: a multi-hundred-node supercomputer can easily tip the scales with a multimillion-dollar price tag, whereas a single four-socket server loaded with ioMemory would cost just over $200,000. Add the yearly difference in power and cooling costs between the two systems. Then imagine how much more productive every scientist could be with a Leviathan at their desk.
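The general idea behind processing a graph larger than DRAM on one node is often called semi-external-memory graph traversal: keep the compact per-vertex state in DRAM while the much larger edge lists live on flash and are read in as needed. The sketch below illustrates that idea under stated assumptions; the function names are hypothetical and this is not LLNL's actual implementation:

```python
# Minimal sketch of a semi-external-memory BFS (an illustration of the
# general technique, not LLNL's code): per-vertex levels fit in DRAM,
# while adjacency lists are fetched from flash-backed storage on demand.
from collections import deque

def bfs_semi_external(num_vertices, read_neighbors, source):
    """BFS where read_neighbors(v) fetches v's adjacency list from
    slower, denser storage; only per-vertex state lives in memory."""
    level = [-1] * num_vertices      # in-DRAM vertex state
    level[source] = 0
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        for w in read_neighbors(v):  # streamed from flash in a real system
            if level[w] == -1:
                level[w] = level[v] + 1
                frontier.append(w)
    return level

# Usage with a small in-memory stand-in for the flash-resident edge lists:
adjacency = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_semi_external(4, lambda v: adjacency[v], 0))  # [0, 1, 1, 2]
```

Because per-vertex state is tiny compared to the edge lists, a server with 12 TiB of flash can hold a graph whose edges would otherwise require DRAM spread across hundreds of nodes.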
While the rest of the world is waking up to using NAND flash in the enterprise datacenter, LLNL gives us a glimpse into what could be the next revolution: ioMemory supplanting DRAM rather than primary storage. It also demonstrates what many other Fusion-io customers have already experienced: dramatic server consolidation and increased performance when hot data can be loaded onto ioMemory.