All the talk about the Oscars turned my thoughts to the world of movie-making, and to its ever-growing need for processing speed and data capacity.
Today, movies are an all-digital medium. They are rendered on server farms, and the finished product is delivered to movie theatres as a gigantic data file. In between, there are multiple stages in the movie lifecycle for compositing, editing, finalizing, and storing the images. In fact, a specialized datacenter infrastructure, involving thousands of processor cores and lots of memory, supports this lifecycle.
The evolution of 3D movies like Gravity and Mr. Peabody & Sherman, along with new special effects, makes the demand for performance and storage capacity in those film studio datacenters even stronger. To add richer detail to each image and support high-resolution standards (4K and 8K), data capacity in film studio render farms will grow from hundreds of terabytes (TB) into the petabyte (PB) range.
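To see why higher resolutions push render farms toward petabyte capacities, here is a back-of-envelope sketch. The frame rate, runtime, and 16-bit RGB assumption are my own illustrative choices, not any studio's figures, and real pipelines store many intermediate passes per frame, multiplying these numbers further:

```python
# Illustrative storage math: uncompressed frame size =
# width * height * 3 color channels * bytes per channel.
def raw_footage_bytes(width, height, fps=24, minutes=120, bytes_per_channel=2):
    """Uncompressed storage for a single pass of a finished film (16-bit RGB)."""
    frame = width * height * 3 * bytes_per_channel
    return frame * fps * minutes * 60

TB = 1024 ** 4
for name, w, h in [("HD", 1920, 1080), ("4K", 4096, 2160), ("8K", 8192, 4320)]:
    print(f"{name}: {raw_footage_bytes(w, h) / TB:.1f} TB per uncompressed pass")
```

Even a single uncompressed pass jumps by roughly 4x at each resolution step, since doubling both dimensions quadruples the pixel count.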
Let’s take a look at the changes in movie-making infrastructure that occurred between 1997 and 2009. These are the “bookend years” in which Director James Cameron produced two great films: Titanic and Avatar.
In 1997, Cameron’s team relied on 350 SGI processors and 200 DEC Alpha processors, with 5 TB of disk capacity, to render the movie with all its CGI effects. At the time, the SGI and DEC Alpha servers were among the fastest in the world, leveraging 64-bit computing to create visual special effects and immersive CGI graphics. The team used a 100Mbps network to link the servers in the rendering farm together.
By 2009, the specifications had changed dramatically. Weta Digital, based in New Zealand, hosted the rendering farm for Avatar. The production studio used 34 racks of computers, with 4,000 HP blade servers running 40,000 processor cores, backed by 104 TB of overall memory. To link all the servers together with high bandwidth, Weta Digital used 10-gigabit-per-second (10Gbps) networking interconnects.
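As a rough sense of why those 10Gbps interconnects matter, here is a simple sketch (my own illustrative file sizes, not Weta's) of how long it takes to move a multi-terabyte asset across links of different speeds:

```python
# Illustrative transfer-time math: size in decimal terabytes,
# link speed in gigabits per second.
def transfer_hours(size_tb, gbps):
    """Hours to move size_tb terabytes over a gbps-speed link at full rate."""
    bits = size_tb * 1e12 * 8          # decimal TB -> bits
    return bits / (gbps * 1e9) / 3600  # bits / (bits per second) -> hours

# A hypothetical 8 TB asset: ~18 hours at 1997-era 100Mbps speeds
# shrinks to under 2 hours at 10Gbps.
for gbps in (0.1, 1, 10):
    print(f"{gbps:>4} Gbps: {transfer_hours(8, gbps):.1f} h")
```

The hundredfold jump in link speed between the Titanic-era farm and Weta's is what keeps thousands of render nodes fed with frame data instead of idling on I/O.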
Other film studios have made similar investments in building out their datacenters. Last year, DreamWorks Animation SKG provided some details of its rendering farm for “The Croods.” The rendering engines and associated storage required 250TB of memory to make the movie, and about 70TB of data was archived for later re-use.
All-digital movies are shipping richer content to movie-theatre audiences, and they are doing so in less time than before, thanks to the technology refresh that is updating the infrastructure inside film studios’ datacenters. Film studios report that the added computing power has made complex animation tasks faster to complete, including richer textures on clothing, characters, background scenes, and even a monster’s blue hair.
The Movie-Making Lifecycle
In the movie lifecycle, some processes are CPU-centric (rendering), while others are I/O-centric (playback, editing, and data transmission). Rendering the movie is only the first step in the process. For playback and media editing, mixed-use storage modes are prevalent, while read-intensive modes dominate media streaming to workstations, servers, and high-capacity data repositories. Delivering the finished movie via media streaming, and archiving the film, completes the cycle.
What Does This Mean for Flash in Films?
Flash storage is widely recognized for its compact form factors, its high capacity, and its ability to accelerate performance. Its ability to cache large amounts of data is well known, and is leveraged for transactional workloads and database updates. Flash is also less expensive than DRAM, which could shape future technology refreshes, with more flash installed in servers and server blades.
We already know that flash storage accelerates many workloads in the datacenter, from financial applications’ online transaction processing (OLTP) to high-performance computing (HPC). In those workload areas, flash SSDs accelerate job completion, improving results by 3-5X or more.
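Those 3-5X figures line up with a simple Amdahl's-law-style model: flash only speeds up the I/O portion of a job, so overall gains are capped by how much of the job's time is spent on storage. A minimal sketch, using hypothetical numbers rather than measured workloads:

```python
# Amdahl's-law-style model of flash acceleration: only the I/O fraction
# of a job's wall time gets faster; the CPU fraction is unchanged.
def job_speedup(io_fraction, io_accel):
    """Overall speedup when io_fraction of the job runs io_accel times faster."""
    return 1.0 / ((1.0 - io_fraction) + io_fraction / io_accel)

# A hypothetical job that is 80% I/O-bound, with flash 10x faster than disk:
print(f"{job_speedup(0.8, 10):.1f}x")  # ~3.6x overall
```

The model also explains why rendering, being CPU-centric, benefits less from flash than the I/O-centric playback and editing stages do.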
Now, in the film business, we’re seeing flash leveraged inside digital cameras and inside artists’ workstations for editing purposes. We’re seeing flash in media editing and media streaming, with high I/O rates and large data-sets, moving videos over the Internet and across high-speed datacenter networks. And we’re seeing it inside the servers that distribute digital content throughout the film studio and deliver the final product to movie theatres.
It’s my view that flash will take on wider roles within the movie-making process in the coming years. Many film studios are already leveraging flash technologies, or mixing flash with hard-disk drives (HDDs) in hybrid deployments. Several attributes make flash a natural fit for the movie lifecycle: flash SSDs support large data-sets and high IOPS rates, and they plug into standards-based interfaces like SAS, SATA, and PCIe. More importantly, they can accelerate many phases of the movie-making process, allowing applications to run much faster. With all those forces already in play, my question is this: how quickly will film studios transform into flash-enabled datacenters? Look for flash SSDs to do a “star turn” in film studios’ infrastructure, and look for flash adoption to grow in this industry sector over the next couple of years.