Data Center Tech Blog

The data center is coming apart – and then it’s being put back together in a new way.

That was my takeaway after listening to many hours of presentations at last week’s Open Server Summit, a conference that took place at the Santa Clara Convention Center Nov. 11-Nov. 13.


Focused on the server and storage industries, the conference’s keynotes and breakouts showed the clear link between server design and the need to support fast-emerging, more demanding workloads.

Some of the leading technologies helping the new-wave data center meet these demands include:

  • Disaggregated building-blocks, tied together by high-speed fabrics and high-speed switches
  • Optical (light-driven) connectors linking high-speed processing with high-speed storage
  • Hyperconverged systems, combining servers, storage and networking for faster performance
  • “Flat networks” linking sections of the data infrastructure fabric
  • More, and better, software management – for policy enforcement, orchestration and automation
  • Fewer and fewer isolated “islands of automation”

This will not happen immediately, or even completely.

Instead, it will be driven by the process of technology refresh – and by the outsourcing of applications and databases to hosting companies and service providers for ongoing production and maintenance.

This thinking is in sync with what many market researchers have been saying, including Gartner’s Nexus of Forces and IDC’s Third Platform. A new layer of technologies is being introduced into the data centers of the world to deal with the megatrends of Cloud Computing, Big Data Analytics, Mobility and Social Media. A new infrastructure is emerging to meet traditional IT needs – and to add new functionality for new workloads. This is how web-scale and hyperscale technologies will work their way into traditional data centers over time – and into cloud data centers more immediately.

Server Design Is Morphing

Server design is evolving, or “morphing,” to bring the technology pieces closer together for greater performance. Processors, storage and networking, which used to be viewed as separate components, are now seen as vital ingredients of more integrated, unified systems. This approach is in sync with computer science, which has long shown that distances within the server must get shorter, and communications must get faster, to reach the next step of high-throughput computing.

Much of this is driven by advances in materials technology – optical connectors, multi-core processors and denser data devices could only be envisioned 20 years ago, not implemented. Today, these technologies are built into shipping products and are being adopted, especially for high-performance and mission-critical workloads.

Sound Bites

Many of these concepts were outlined in the Open Server Summit keynotes. Here are highlights from some of the keynote speakers:

  • Jim Pinkerton of Microsoft: Hyperscale Cloud Solutions Are Right for Enterprise Clouds, Too. Pinkerton, who is a partner architect lead at Microsoft working on the Windows Azure cloud, provided a snapshot of the technology innovation that has taken place within the Azure cloud – where cloud services support hyperscale workloads alongside enterprise workloads (e.g., Microsoft SQL Server, Microsoft Exchange). Many of the key concepts driving this infrastructure, and the technology within it, were informed by Microsoft Research, leading to the flat networks and fabric interconnects it has today. Variable workloads are inevitable in this kind of environment, which is why scalable infrastructure is so important to providing elastic computing capabilities.
  • Raejeanne Skillern of Intel: Digital Services Drive the Move to Software-Defined Infrastructure (SDI). Skillern, who is general manager of Intel’s Cloud Service Provider business, spoke about software-defined infrastructure – and the way it is working to provide virtualized pools of compute, storage and networking resources. This is being done to make sure that data services will scale, as needed, based on user demand for those services. Dynamic orchestration of processing and a high degree of automation are both key enablers of this kind of software-defined infrastructure in next-generation data centers (a rough sketch of the idea appears after this list).
  • Jian Li of Huawei: Developing the High Throughput Computing Data Center. Li, who is a research director and chief architect at Huawei, described a High Throughput Computing Data Center (HTC-DC) that supports higher throughput, better resource utilization, greater manageability and more efficient power usage. Big Data workloads demand this kind of infrastructure, he said – and that is why software-defined networks are so important in leveraging the resources already inside the data center to achieve workload scalability.

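To make the SDI concept a bit more concrete, here is a minimal, hypothetical sketch in Python of what “virtualized pools of compute, storage and networking” plus dynamic orchestration can look like. The class names, pool sizes and workloads are illustrative assumptions only – not Intel’s implementation or any vendor’s actual API.

```python
# Hypothetical sketch of software-defined infrastructure:
# capacity is modeled as virtualized resource pools, and an orchestrator
# places workloads against those pools based on demand.

from dataclasses import dataclass, field


@dataclass
class ResourcePool:
    """A virtualized pool of one resource type (e.g., vCPUs or TB of flash)."""
    name: str
    capacity: float
    allocated: float = 0.0

    def available(self) -> float:
        return self.capacity - self.allocated

    def allocate(self, amount: float) -> bool:
        if amount > self.available():
            return False          # would overcommit; the pool must be scaled out first
        self.allocated += amount
        return True


@dataclass
class Orchestrator:
    """Places workloads against the pools; defers them when capacity runs short."""
    pools: dict = field(default_factory=dict)

    def add_pool(self, pool: ResourcePool) -> None:
        self.pools[pool.name] = pool

    def place(self, workload: str, demands: dict) -> bool:
        # All-or-nothing placement: every demanded resource must fit its pool.
        if all(self.pools[r].available() >= amt for r, amt in demands.items()):
            for r, amt in demands.items():
                self.pools[r].allocate(amt)
            print(f"placed {workload}: {demands}")
            return True
        print(f"deferred {workload}: insufficient capacity, scale-out needed")
        return False


if __name__ == "__main__":
    orch = Orchestrator()
    orch.add_pool(ResourcePool("vcpu", capacity=512))
    orch.add_pool(ResourcePool("flash_tb", capacity=100))
    orch.place("oltp-db", {"vcpu": 64, "flash_tb": 20})
    orch.place("analytics", {"vcpu": 480, "flash_tb": 10})  # deferred: vcpu pool too small
```

In a real SDI stack the orchestration layer would also grow the pools automatically (adding racks or flash “sleds”) rather than simply deferring work, which is the elasticity the keynotes emphasized.
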
SanDisk®’s Activities at Open Server Summit

Highlights of SanDisk®’s participation in the Open Server Summit included the following:

Open Server Summit 2014 speaker Hemant Gaidhani

Hemant Gaidhani, SanDisk Director of Product Marketing and Management, spoke about ultra-low latency and SanDisk’s ULLtraDIMM as it is used in x86 servers with Atlantis Computing Inc.’s storage-management software. His talk was part of a panel breakout session entitled “Unleash Storage Performance with Flash on the Memory Bus.”

The Cloud Server Design panel explored new form factors and data density in servers as they evolve into denser packaging and smaller designs. Panelists included Brian Payne, Executive Director of Platform Marketing at Dell; Allen Samuels, SanDisk Engineering Fellow; Sev Onyshkevych, CMO of FieldView Solutions of Edison, N.J., a software management framework company; and Ben Woo, Managing Director of market-research firm Neuralytix in New York.

Open Server Summit speaker Jean Bozman

The Future of Open Servers and Open Storage panel provided a wide-ranging discussion of changing assumptions about how servers will support the workloads of 2020 and beyond. I spoke on this panel, along with Anil Vasudeva, President of IMEX Research; Amit Sanyal, Senior Director of Product Management/Technical Marketing at Dell; and Akber Kazmi, Senior Product Line Manager at Avago Technologies.

Flash’s Important Role in Data Center Technology Refresh

After three days of keynotes, presentations and discussions at the conference, it was very clear to me that flash is a key ingredient in this makeover of the traditional data center.

  • Flash is being leveraged in hyperconverged systems, servers, storage devices and all-flash storage arrays to deliver higher performance for applications and databases.
  • Edge servers, including many with flash storage, are also relaying content to “depots” across the Internet, for faster and smoother transmission to end-users.
  • Enterprise data-center managers, especially those from Wall Street firms, are telling us that they are already leveraging flash for online transaction processing. Flash has further key benefits in speeding up Big Data Analytics and in supporting higher volumes of virtual desktops.
  • Cloud service providers are telling us that “sleds” of flash are being added to racks of servers just to handle the data volume generated by their data services.

All of these are pieces of the overall infrastructure build-out that requires data centers to provide more data, more quickly, than ever before. That’s why, as data centers “morph” before our eyes, we’ll see an expanding role for flash-enabled systems.

