
Web-scale computing is deeply established at the world’s largest cloud service providers – Google, Facebook, Amazon, Microsoft and others. But can hyperscale computing, so well known for delivering cloud services, be a model for enterprise computing, too?

Analysts speaking at the Gartner Data Center Conference (Las Vegas, Dec. 9-12)* provided a view on web-scale computing deployments – and what they would mean to enterprise data centers. Two speakers, Carl Claunch, Gartner research vice president and distinguished analyst, and Arun Chandrasekaran, Gartner research director, made the point that web-scale computing technologies are already being applied to enterprise computing – and that this will increasingly be the case in coming years.

Scaling Up with Scale-out Architecture

Web-scale computing emphasizes scale-out over scale-up server nodes, and it relies on an architecture built on standards, APIs and a high degree of abstraction that allows workloads to run on top of a standardized platform. Importantly, it allows data centers to scale up workloads without scaling up individual servers, because computing resources throughout the infrastructure are leveraged, as needed, using orchestration software. At the same time, enterprises will increasingly leverage self-built systems and storage, based on volume components, to meet the demand to scale the enterprise infrastructure for cloud-style computing.
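To make this concrete, here is a minimal sketch, in Python, of the kind of placement decision orchestration software makes in a scale-out cluster: workloads go to whichever standardized node has spare capacity, so the cluster grows by adding nodes rather than by enlarging any one server. All names and capacity units here are hypothetical.

```python
# A minimal, hypothetical sketch of scale-out workload placement: an
# orchestrator spreads workloads across many small, identical nodes
# instead of growing any single server. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    capacity: int          # abstract capacity units per standardized node
    used: int = 0

    def free(self) -> int:
        return self.capacity - self.used

@dataclass
class Orchestrator:
    nodes: list = field(default_factory=list)

    def place(self, workload: str, demand: int) -> str:
        # Least-loaded placement: pick the node with the most free capacity.
        candidates = [n for n in self.nodes if n.free() >= demand]
        if not candidates:
            raise RuntimeError("cluster full: scale out by adding nodes")
        node = max(candidates, key=lambda n: n.free())
        node.used += demand
        return node.name

cluster = Orchestrator([Node(f"node{i}", capacity=100) for i in range(4)])
print(cluster.place("web-frontend", 30))    # lands on an empty node
print(cluster.place("analytics-job", 60))   # spreads to another node
```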

Certainly, the lessons of web-scale computing could be applied immediately to enterprise infrastructure: In any large deployment of web-scale servers (many thousands of servers), something is always broken, which could cause disruption if not properly addressed.

In his presentation*, Claunch recommended the following to reduce operational disruption:

• Measure and improve everything done in volume (see the sketch after this list)
• Use custom tooling and lean processes
• Cut down on the causes and risks of error
• Choose a small set of building blocks
• Specify machinery in great detail
• The payoff comes in multiple ways: cost reduction, speed, quality, and turning “it’s not possible” into “achievable”
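As an illustration of the first recommendation, here is a minimal, hypothetical sketch of fleet-wide failure measurement: normalizing failure counts per thousand nodes makes recurring error causes visible at scale. The event records and fleet size are invented for the example.

```python
# A minimal sketch of "measure everything done in volume": aggregating
# per-component failure counts across thousands of nodes surfaces the
# recurring causes of error. Event records here are invented.

from collections import Counter

def failure_report(events, fleet_size):
    """events: iterable of (node, component) failure records."""
    by_component = Counter(component for _node, component in events)
    for component, count in by_component.most_common():
        # Failures per thousand nodes make fleets of different sizes
        # directly comparable.
        rate = 1000 * count / fleet_size
        print(f"{component:8s} {count:3d} failures ({rate:.1f} per 1k nodes)")

failure_report(
    [("node17", "disk"), ("node02", "dimm"), ("node88", "disk"),
     ("node41", "psu"), ("node93", "disk")],
    fleet_size=10_000,
)
```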

Commentary

At any given time, one or more servers are likely to have a failing component or faulty software. However, web-scale computing anticipates that, and addresses it, through rapid provisioning, automation and orchestration that moves workloads to appropriate resources as needed.

This software dynamism is enabled by a new style of web-oriented application development. There must be a capability for auto-discovery, as well as a capacity to address any faulty portions in the infrastructure. This software senses the changing needs of the data center, and adapts.
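A minimal sketch of that sense-and-adapt loop, with hypothetical names: nodes announce themselves through heartbeats (auto-discovery), and a reconcile step moves workloads off any node that stops reporting in.

```python
# A minimal sketch of the auto-discovery and self-healing loop described
# above. Nodes join by sending heartbeats; workloads on a node that stops
# reporting are rescheduled elsewhere. All names are hypothetical.

import time

class Cluster:
    HEARTBEAT_TIMEOUT = 15.0   # seconds of silence before a node is faulty

    def __init__(self):
        self.last_seen = {}    # node -> timestamp of its last heartbeat
        self.placement = {}    # workload -> node currently running it

    def heartbeat(self, node):
        # Auto-discovery: a node joins the cluster simply by reporting in.
        self.last_seen[node] = time.time()

    def healthy_nodes(self):
        now = time.time()
        return [n for n, t in self.last_seen.items()
                if now - t < self.HEARTBEAT_TIMEOUT]

    def reconcile(self):
        # The sense-and-adapt step: move workloads off faulty nodes.
        healthy = self.healthy_nodes()
        for workload, node in list(self.placement.items()):
            if node not in healthy and healthy:
                self.placement[workload] = healthy[0]  # naive re-placement
                print(f"moved {workload}: {node} -> {healthy[0]}")

cluster = Cluster()
cluster.heartbeat("node-a")
cluster.heartbeat("node-b")
cluster.placement["web-frontend"] = "node-a"
# If node-a later stops sending heartbeats, a periodic reconcile() call
# re-places its workload onto a node that is still reporting in.
cluster.reconcile()
```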

Protecting the Applications – and Designing for Change

But there is more: Duplication and redundancy, supported by deployments of multiple physical or virtual machines (PMs or VMs), ensure business continuity in the face of disruptions caused by component failures. Multiple power feeds into the data center are another prescription for business continuity, as are multiple networking links into, and out of, the data center.

Beyond that, DevOps, the development style for new applications, should be practiced with web-scale computing in mind. Key ingredients will include standardized hardware, standardized software APIs, leveraging of open-source OpenStack software and Open Compute specifications, and software-defined networking (SDN). Worldwide, the open-source software ecosystem is growing, and open-source software is expected to see wide adoption for web-scale computing.
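As a small illustration of the standardized-API ingredient, here is a sketch using the open-source openstacksdk Python library; the cloud name “mycloud” is a hypothetical entry in a local clouds.yaml file, and the same inventory code would run unchanged against any OpenStack deployment.

```python
# A minimal sketch of treating infrastructure as a standardized, API-driven
# platform, using the open-source openstacksdk library. The cloud name
# "mycloud" is a hypothetical entry in a local clouds.yaml file.

import openstack

conn = openstack.connect(cloud="mycloud")

# Because every OpenStack deployment exposes the same APIs, this inventory
# code is portable across vendors and across self-built clouds.
for flavor in conn.compute.flavors():
    print("flavor:", flavor.name, flavor.vcpus, "vCPUs,", flavor.ram, "MB RAM")

for server in conn.compute.servers():
    print("server:", server.name, server.status)
```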

In short, enterprise IT should take several key lessons from web-scale data centers, which have, to date, focused more on search and personal data than on enterprise business workloads. Chief among these lessons: anticipate scale barriers. Growth arrives along several dimensions at once: the number of users supported, the number of files supported, the raw number of processor cores and the number of terabytes (TB) of data. Clearing those barriers means providing enough infrastructure capacity for processing, storage and networking.

Webscale Computing and Storage

Gartner analyst Arun Chandrasekaran noted in his presentation that webscale computing could reduce acquisition and operational costs, reduce vendor lock-in (over-dependence on any single vendor) — and provide more IT flexibility and business agility. He said in his presentation*, “Self-build storage products can lower by up to 30% the TCO when compared to traditional products.”
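To show how such a TCO (total cost of ownership) comparison is typically computed, here is a worked example with purely hypothetical figures; the numbers are chosen only to illustrate the arithmetic behind a 30% savings claim, not to reproduce Gartner’s data.

```python
# A worked example, with purely hypothetical figures, of the arithmetic
# behind a self-build vs. traditional storage TCO comparison: lower
# acquisition cost, similar operating cost, over a three-year horizon.

def tco(acquisition, annual_ops, years=3):
    # Total cost of ownership = up-front cost plus recurring cost.
    return acquisition + annual_ops * years

traditional = tco(acquisition=500_000, annual_ops=100_000)  # vendor array
self_build = tco(acquisition=230_000, annual_ops=110_000)   # volume parts

savings = 1 - self_build / traditional
print(f"traditional: ${traditional:,}  self-build: ${self_build:,}")
print(f"savings: {savings:.0%}")   # 30% with these assumed numbers
```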

The benefits of this approach, he said in his conference presentation*, include the following:
• Lower acquisition costs
• Freedom from lock-in
• Standardization
• Automation and operational efficiency
• Agile and versatile infrastructure
• Differentiated services

Commentary on How to Get There

To achieve this, enterprise data centers should consider adopting standards for scale-out storage that can be applied across multiple vendors in RFPs. Where applicable, enterprise IT should also evaluate shared-nothing storage as an alternative to shared-everything storage; of course, adopting shared-nothing computing requires software changes, so that applications can identify and access available resources. Further, enterprise IT teams should look to DevOps teams to generate a new set of web-scale applications that can run across servers and storage in scale-out computing, given a high degree of abstraction from the underlying hardware. These DevOps teams would have the cross-functional skill sets to address end-to-end applications that span the enterprise. Finally, enterprise IT should consider new “cold storage” models for archiving infrequently accessed data, similar to the model that Facebook has developed for storing infrequently accessed photos.
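One common way software in a shared-nothing design identifies and accesses available resources is consistent hashing, which maps each data item to a node without any shared controller; a minimal sketch follows, with hypothetical node names.

```python
# A minimal sketch of data placement in a shared-nothing design: each key
# maps to a node via consistent hashing, so no shared controller is
# needed, and adding nodes moves only a fraction of the data. Names are
# hypothetical.

import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=64):
        # Many virtual nodes per physical node smooth out the distribution.
        self._ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        # The first virtual node clockwise from the key's hash owns the key.
        idx = bisect.bisect(self._keys, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["storage-a", "storage-b", "storage-c"])
print(ring.node_for("/photos/2013/12/img_0042.jpg"))
```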

Commentary: Flash as a Key Component

Given this background, it is clear that flash memory will play an important role in this kind of scale-out computing environment, for several reasons:

• Flash speeds the processing of large datasets associated with Big Data, analytics and transactional processing.
• Flash makes caching of those large datasets (multiple TB) faster and more efficient (see the caching sketch after this list).
• Flash provides local data stores for distributed data, which will become more prevalent in the age of web-scale computing.
• Flash provides non-volatile memory, which preserves data even when there is a power outage.
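To illustrate the caching role noted above, here is a minimal sketch of a flash tier modeled as an LRU cache in front of a slower capacity tier; the SSD is stood in for by an in-memory structure, and all names are hypothetical.

```python
# A minimal sketch of flash as a caching tier in front of slower disk:
# hot items are served from an SSD-backed cache (modeled here as an LRU
# dict), and cold misses fall through to the capacity tier.

from collections import OrderedDict

class FlashCache:
    def __init__(self, capacity_items, backing_store):
        self.capacity = capacity_items
        self.cache = OrderedDict()      # stands in for the SSD tier
        self.backing = backing_store    # stands in for the disk tier

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)     # keep hot data hot
            return self.cache[key]
        value = self.backing[key]           # slow path: read from disk
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the coldest item
        return value

disk = {f"block{i}": f"data{i}" for i in range(1000)}
cache = FlashCache(capacity_items=100, backing_store=disk)
print(cache.get("block7"))   # miss: read-through from the disk tier
print(cache.get("block7"))   # hit: served from the flash tier
```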

We note here several industry examples of flash-enabled storage deployments that leverage flash across multiple web-scale tiers. One is Facebook’s “sleds” of flash-based storage for archiving photographs that are rarely accessed. Another is VMware’s virtual SAN (VSAN), which leverages SSDs to provision VMs, as needed, under VMware vSphere 5.5. (NOTE: SanDisk® SAS SSDs were recently validated and certified for use in VMware VSAN deployments. This capability was demonstrated at the conference at the Symantec exhibit-hall booth, where Storage Foundation 6.1’s VSAN support was shown).

Flash memory adoption in the data center will continue to increase in coming years, leading to what SanDisk is calling the All-Flash Data Center. This means that adoption, across servers and storage, will grow rapidly, enabling web-scale computing, cloud computing, high-performance computing (HPC) and analytics.

Summary: We’re On the Path to Hyperscale-Style Enterprise IT

Born in the hyperscale data center, web-scale computing architectures will be increasingly applied to enterprise IT data centers. This will happen as old infrastructure is replaced, giving way to new, more modular infrastructure that scales workloads by aggregating more resources to support those growing workloads. Things that were once done only in scale-up, vendor-specific servers are now being done in scale-out, shared-nothing architectures – as long as the software environment has the ability to adapt to the new scale-out infrastructure design principles.

This is a new way of thinking about infrastructure in the enterprise – and it will take years before it becomes the predominant style of building out new enterprise data centers.

Large enterprise data centers already have the skill sets needed to apply web-scale technologies to their infrastructure, and they are expected to adopt this web-scale mode of computing near-term for a variety of workloads. Meanwhile, for SMB sites and smaller data centers, some web-scale workloads may shift to hosting providers and to cloud service providers, avoiding a rip-and-replace technology upgrade.

Going forward, IT managers will increasingly find themselves in the role of “brokers” of web-enabled, cloud-enabled data services, some delivered on-prem (within the enterprise data center) and others delivered off-prem, at service provider (SP) and cloud service provider (CSP) sites. But, as the ideas of web-scale computing become more firmly established among enterprise IT data center managers, and as flash technologies gain adoption in servers and storage devices, the presence of these web-scale deployments will grow steadily, becoming the foundation of many next-generation data center build-outs.

– Jean S. Bozman

*Gartner, Data Center Conference presentations, December 2013: “Design and Architecture of Web-Scale IT Systems,” by Carl Claunch; “Building Your Own Storage: Can You Mimic the Big Cloud Providers?” by Arun Chandrasekaran.
