Optimize Performance for Virtualized and Data-Intensive Applications
Flash as a cache serves the I/O needs of virtualized and physical applications while minimizing the latency of traversing the network to reach NAS or SAN storage. Caching can be targeted on a per-VM, per-file, per-volume, or per-disk basis so that performance is consistently delivered to the virtualized applications that need it.
Support vMotion to Preserve IT Infrastructure Agility
With ioCache there is no need to sacrifice performance for agility in your VMware environment. ioCache fully supports dynamic vMotion, so administrators can move VMs at will without installing ioMemory in every physical server. The ioTurbine software dynamically allocates cache as VMs come and go, making the best use of cache capacity, and supports setting cache on a per-VM basis to protect the performance of mission-critical and business-critical VMs. With ioTurbine, users can load-balance across servers dynamically with VMware Distributed Resource Scheduler (DRS) and can schedule maintenance with no downtime using vMotion, all without overprovisioning cache on each participating host.
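To make the allocation behavior concrete, here is a minimal sketch of how a host-side cache manager might redistribute flash capacity as VMs migrate on and off a host, honoring per-VM reservations for critical workloads. This is an illustrative model, not ioTurbine's actual implementation; all class and method names are hypothetical.

```python
class CacheAllocator:
    """Illustrative model: divide a host's flash cache among resident VMs."""

    def __init__(self, total_gb):
        self.total_gb = total_gb
        self.reserved = {}   # vm name -> guaranteed GB for critical VMs
        self.active = set()  # VMs currently running on this host

    def vm_arrived(self, vm, reserve_gb=0):
        """Called when vMotion moves a VM onto this host."""
        self.active.add(vm)
        if reserve_gb:
            self.reserved[vm] = reserve_gb

    def vm_departed(self, vm):
        """Called when a VM leaves; its cache share is freed for others."""
        self.active.discard(vm)
        self.reserved.pop(vm, None)

    def shares(self):
        """Reserved VMs keep their guarantee; the rest split what remains."""
        guaranteed = sum(self.reserved.values())
        unreserved = [vm for vm in self.active if vm not in self.reserved]
        leftover = max(self.total_gb - guaranteed, 0)
        per_vm = leftover / len(unreserved) if unreserved else 0
        return {vm: self.reserved.get(vm, per_vm) for vm in self.active}
```

Because shares are recomputed whenever a VM arrives or departs, no host needs cache sized for its worst-case VM population, which is the overprovisioning the text refers to.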
Integrate Transparently with Existing Infrastructure
The ioTurbine caching software supports Windows, Linux, and VMware environments, solving the random "I/O blender" effect of virtualization by caching within each guest OS VM for file-level-aware acceleration. By coupling tightly with the file-system I/O routines in the guest OS, I/O patterns are transparently redirected so that flash storage is shared efficiently across all hosted VMs. No configuration changes are needed in the guest OS, host, storage, or networking equipment.
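The redirect logic described above can be sketched as a read-through cache in front of shared storage. The real software hooks file-system I/O inside the guest OS kernel; this user-space model only illustrates the decision the interception layer makes on each read. Names are hypothetical.

```python
class ReadThroughCache:
    """Illustrative read-through cache: flash on hit, shared storage on miss."""

    def __init__(self, backing_read):
        self.backing_read = backing_read  # function: block_id -> data (SAN/NAS)
        self.flash = {}                   # stands in for the local ioMemory cache

    def read(self, block_id):
        # Cache hit: serve from local flash, skipping the network round trip.
        if block_id in self.flash:
            return self.flash[block_id]
        # Cache miss: fetch from shared storage and populate the cache.
        data = self.backing_read(block_id)
        self.flash[block_id] = data
        return data
```

Because the interception happens below the application, the guest workload sees the same file and block interfaces it always did, which is why no configuration changes are required.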
Consolidate Software Licenses and Infrastructure
Satisfying IOPS from flash increases the workload each server can support, driving further consolidation and increasing ROI. For customers who license enterprise software by CPU or core, consolidating more applications onto fewer servers offers immediate, tangible cost savings.
Offload Shared Storage
Satisfying read IOPS from flash as a cache frees shared primary storage controllers to better service persistent writes. This complementary effect of flash as a cache lets users extend the useful life of shared storage infrastructure and rebalance the processing load from expensive storage to cost-effective servers.
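This division of labor follows from the write-through cache mode listed in the specifications: every write is committed to primary storage before completion, so the cache never holds dirty data and can be discarded (for example, across a vMotion) without loss. A minimal sketch of those semantics, with hypothetical names:

```python
class WriteThroughCache:
    """Illustrative write-through cache in front of shared primary storage."""

    def __init__(self, primary):
        self.primary = primary  # dict standing in for SAN/NAS blocks
        self.flash = {}         # stands in for the local flash cache

    def write(self, block_id, data):
        # Write-through: primary storage is always updated synchronously,
        # so the cache holds no dirty data and can be dropped at any time.
        self.primary[block_id] = data
        self.flash[block_id] = data

    def read(self, block_id):
        # Reads hit flash when possible, offloading the storage controller.
        if block_id in self.flash:
            return self.flash[block_id]
        data = self.primary[block_id]
        self.flash[block_id] = data
        return data
```

Reads that hit flash never reach the array, which is the offloading effect the text describes; writes still land on primary storage, so durability is unchanged.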
ioCache Specifications

| Specification | Value |
| --- | --- |
| Bundled software | ioTurbine Virtual, ioSphere |
| Cache mode | Write-through |
| Capacity | 750 GB MLC NAND flash |
| Form factor | Half-height, half-length, low-profile PCI Express x4 card |
| Bus interface | PCI Express 2.0 x4 |
| Read bandwidth (1 MB) | 1,400 MB/s |
| Write bandwidth (1 MB) | 1,100 MB/s |
| Random read IOPS (512 B) | 135,000 |
| Random write IOPS (512 B) | 535,000 |
| Random read IOPS (4 KB) | 130,000 |
| Random write IOPS (4 KB) | 235,000 |
| Read access latency | 77 µs |
| Write access latency | 19 µs |
| Warranty | 5 years or maximum endurance used |
ioTurbine System Requirements

Guest operating systems:
- 64-bit Windows Server 2008 R2, Windows Server 2008
- 32- and 64-bit Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 SP2
- 64-bit Red Hat Enterprise Linux (RHEL) 5.6, 5.7, 5.8, 6.1, 6.2, 6.3
- 64-bit SUSE Linux Enterprise Server (SLES) 11 SP1

Hypervisor: VMware ESX 4.0 Update 2 and Update 3, ESX 4.1, ESXi 4.1, ESXi 5.0, ESXi 5.1

Supported external storage: SAN or NFS-based storage