Data Center Tech Blog
Nisha Talagala

SanDisk Fellow, Advanced Technology Group

In the past several years our industry has seen the emergence of the “Software-Defined” concept. Where previously resource management, policy, and data management were embedded within hardware products, Software-Defined Networking and Software-Defined Storage advocate separating management policy from the hardware components themselves, letting software coordinate access to a variety of hardware capabilities in a manner that is optimal for the applications and the data center as a whole. As data centers scale to hundreds of thousands of machines across multiple geographies, software-defined technologies help users and administrators harness available resources holistically and dynamically at massive scale.

While the software-defined approach was being developed for network and storage technologies, the underlying memory technologies in the data center have been undergoing a transformation of their own. Flash has already transformed the data center, improving application performance and reducing infrastructure costs through greater server consolidation. As flash pushes toward lower cost (and in some cases lower endurance), the flash tier itself is bifurcating: different flash products based on MLC (Multi-Level Cell) and TLC (Triple-Level Cell) NAND are being driven toward write-heavy or read-heavy usages and architected into forms of hybrid storage for mixed usages. The role of DRAM is also changing, with DRAM/flash hybrids being used to offset requirements for more expensive DRAM. Finally, with NVDIMMs and the promise of future technologies such as ReRAM or Phase Change Memory, we are seeing excitement build for a new class of memory: persistent memory, which combines the persistence of storage with access performance close to that of memory.

Given this richness of media technologies, we now have the ability to create systems and data center solutions which combine a variety of memory types to accelerate applications, reduce power, improve server consolidation, and more.  We believe these trends will drive a new set of software abstractions for these systems which will emerge as software-defined memory – a software driven approach to optimizing memory of all types in the data center.

We at SanDisk® have developed a suite of software technologies that demonstrate the power of software-defined memory. SanDisk’s software-defined memory includes our Non-Volatile Memory File System (NVMFS) and our Auto-Commit Memory (ACM) software and hardware for byte-addressable persistent memory. Together, NVMFS and ACM tier multiple memory sources, from flash to persistent memory, offering both transparent acceleration for legacy applications and optimized integration interfaces for applications written to exploit them. We will be walking visitors through these software-defined memory technologies and concepts at Oracle OpenWorld this week, and we look forward to future advances in the industry driving this category of memory-optimized solutions.

Find SanDisk during Oracle OpenWorld 2014 in booth #1429 and at the Oracle Linux and Virtualization Showcase in kiosk SLX-020. Join the conversation around Software-Defined Memory by following SanDisk on Twitter and LinkedIn.

See the full presentation on Software-Defined Memory and Optimizing Oracle MySQL here:
