Last week’s Open Compute Summit was a major milestone along the way to next-generation hardware designs. I’ve been following the Open Compute Project (OCP) from its start in 2011. A few years ago, OCP’s focus was on Open Rack designs and the “Group Hug” system board – both of which were deployed at Facebook’s datacenter in Prineville, Oregon. Later, these designs were also implemented in datacenters around the world.
Today, I’m glad to see that there are OCP projects focused on networking equipment and cold storage for archiving – building out an open hardware model that increases innovation and reduces costs.
Certainly the “right” people are involved in accelerating the momentum: OCP founders Facebook, Goldman Sachs, Fidelity and Rackspace. The progress since last year’s January meeting was clear. This event broke new ground in bringing more “critical mass” to the OCP conference, with 3,500+ attendees. Microsoft and IBM have joined the organization, bringing their mighty industry weight with them. During the “main tent” keynotes, IT superstars who drove the “waves of change” for the Internet and the Cloud in the 1990s and 2000s – including Andy Bechtolsheim and Marc Andreessen – joined today’s Web and cloud superstars, including Facebook CEO Mark Zuckerberg.
OCP is all about “open hardware” – bringing the principles of open-source software – community and innovation – to the world of hardware devices. At OCP, we saw open designs from server vendors, storage vendors, processor manufacturers and switch makers. Here’s the link where you’ll find the keynotes and an overview of the OCP community projects: OCP Summit
OCP’s Impact on FB’s Datacenters: Mark Zuckerberg’s Keynote Address
Open hardware has already paid off for Facebook, said CEO Mark Zuckerberg, whose appearance was something of a surprise for the techies sitting in the front row of the conference hall. “In the last three years alone,” Zuckerberg said, “Facebook has saved more than $1 billion building out our infrastructure using Open Compute designs.” (You can watch his entire presentation on YouTube.) Those savings began in Facebook’s Prineville, Ore., datacenter – giving rise to an open style of hardware development that has since spread to Facebook’s datacenters in northern Europe and Asia/Pacific.
The OCP-compliant servers operate at ambient temperatures, reducing power and cooling costs – and creating a new ecosystem that can supply servers and storage to Facebook as it rapidly expands its datacenter infrastructure. The same model is also taking hold at other large organizations that build their own datacenters – such as Goldman Sachs and Rackspace, which hosts cloud services. These companies, which are also OCP members, are building out web-scale architecture at a scale that demands cost-competitive technology.
Why is open computing having such an impact on hardware? In short, because it shortens product cycles, saves datacenter space, power and cooling – and broadens the ecosystem that drives innovation in computing. Increasingly, we’re seeing original design manufacturers (ODMs) such as Quanta and Wiwynn – in addition to established original equipment manufacturers (OEMs) such as Dell and HP – leverage OCP to grow market share. The result is increased competition and faster product cycles.
Datacenters Getting Flash-ier
OCP designs, such as Open Rack, house a disaggregated collection of system products – including flash drives that can be packaged with servers and storage arrays. Faster interconnects, as shown at OCP, will make aggregate data rates many times faster than they are today. It’s clear that these developments will grow flash adoption across the tiers of the next-generation datacenter.
Flash technologies for servers and storage are gaining adoption in enterprise and cloud datacenters.
The megatrends of Big Data/ Analytics and Cloud Computing are driving more data traffic into the datacenter than ever before. All of that data must be transferred and stored quickly. OCP designs will broaden the array of servers and storage products that leverage flash in the datacenter.
Flash drives deliver high data-transfer rates and store data in non-volatile memory with no moving parts, preserving data integrity. Flash will have the most profound impact on accelerating data-intensive workloads, such as Big Data, Analytics, High Performance Computing (HPC), financial applications, and more. Innovations in packaging flash for use in servers and in all-flash storage arrays are giving flash a bigger presence across many tiers of the datacenter.
Putting Administrators in the Driver’s Seat: Software-Defined Datacenters Will Leverage New Hardware Designs
Web-scale computing is driving the emphasis on open hardware designs. “These days, the computer is the datacenter,” said Andy Bechtolsheim, co-founder of Sun Microsystems and chairman of cloud-switch-maker Arista Networks, who has designed many server and storage systems. “What the traditional IT industry missed is that there’s a completely different optimization required to build things cost-effectively for the very large scale datacenters, than if you sell things one at a time.” Now, with a new generation of datacenters, he said: “At scale, you can optimize for power, efficiency, connectivity, and fabrics in a very different way.”
The Software-Defined Network (SDN) and open storage are next in terms of open hardware design, said Marc Andreessen, a principal of the VC firm Andreessen Horowitz, who co-created the Mosaic web browser in the 1990s. And who could disagree? Software-Defined Networking and Software-Defined Storage are putting administrators in the driver’s seat, as they manage software-defined objects, continually adjusting the computing and storage resources within the datacenter.
Through the OCP process, a virtuous cycle of invention and reinvention is already taking place. And I think that says a lot about communities of developers and architects. The community development process, which we’ve seen in open-source software, is now being applied to the world of devices. That’s a situation that favors rapid technology refreshes and short product cycles for systems with flash SSDs. From now on, differentiation will hinge on processing efficiency – and on business results.