Data Center Tech Blog

Configuring VMware vSphere’s “Swap to Host Cache” feature with SanDisk SSDs can largely contain the application performance degradation caused by memory swapping. This helps increase VM density, maintain performance at a given SLA, and reduce the cost per VM.

At SanDisk®, we have been researching and testing where SSDs deliver the greatest impact in virtualized environments. One example of this is VMware’s Virtual SAN (VSAN), which I covered in a recent blog post following the publication of our white paper, “VSAN Deployment and Technical Considerations Guide”.

In the coming series of blog posts, I will examine various other scenarios where SSDs can benefit VDI and virtualized-environment workloads, such as Swap to Host Cache, VDI Boot Storm and VDI Admin Operations. These blog posts will be followed by detailed white papers that provide helpful guidelines and technical considerations for deploying SSDs to achieve the greatest performance benefits and cost efficiencies in virtualized environments.

VMware Swap to Host Cache Experiment Using SanDisk SSDs

In this blog post I will demonstrate the benefits of using SanDisk SSDs to significantly accelerate virtualization performance, particularly when VMware administrators are striving to increase VM density while maintaining application service level agreements (SLAs).

Over-Committing Memory

Over-committing memory (i.e., when the total memory utilized by the VMs running on a vSphere host exceeds the physical memory on that host) is a common practice in VMware environments. VMware provides several advanced memory management technologies, such as Transparent Page Sharing (TPS), ballooning, compression and memory swapping, to manage memory over-commitment. When memory swapping does occur, however, its impact is multiplied compared to a physical environment, because many virtual machines (VMs) are running on the host: applications in every affected VM degrade drastically, whereas on a physical host only a single application would suffer.

Over-committing memory certainly gives the VMware administrator an opportunity to increase VM density and reduce the cost per VM. However, if application SLAs are not met, the feature adds little value to the deployment strategy. The ideal outcome is to over-commit memory by adding as many VMs as possible while keeping application performance degradation under control, so that application SLAs are still met. This dual benefit of increased VM density and met application SLAs improves the Total Cost of Ownership (TCO) and Return on Investment (ROI) of a virtualized environment.

VMware vSphere’s “Swap to Host Cache” feature addresses exactly this need. Using SanDisk SSDs as the swap area for these VMs ensures that memory swapping is fast enough not to severely impact application performance, which in turn helps increase VM density. Note that “Swap to Host Cache” is an optional feature that can only be configured on SSDs; it is not permitted on HDDs.
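To illustrate why the backing device matters so much, here is a minimal back-of-envelope sketch in Python. The latency figures (an assumed ~8 ms random read for a 7.2K-RPM HDD versus ~0.1 ms for a SATA SSD) and the swap-in rates are illustrative assumptions, not measurements from our test bed; the sketch only shows how quickly swap-in latency starts to dominate once swapping lands on spinning disk.

```python
# Back-of-envelope model of the extra wait time a VM incurs when pages
# must be swapped back in from the hypervisor's swap location.
# All numbers are illustrative assumptions, not measured values.

HDD_LATENCY_MS = 8.0   # assumed random-read latency of a 7.2K-RPM HDD
SSD_LATENCY_MS = 0.1   # assumed random-read latency of a SATA SSD

def wait_ms_per_second(swapins_per_sec, device_latency_ms):
    """Time per second spent waiting on swap-ins, assuming each swap-in
    costs one random read on the swap device."""
    return swapins_per_sec * device_latency_ms

for rate in (100, 500, 1000):   # hypothetical swap-in rates (pages/sec)
    hdd = wait_ms_per_second(rate, HDD_LATENCY_MS)
    ssd = wait_ms_per_second(rate, SSD_LATENCY_MS)
    print(f"{rate:5d} swap-ins/s -> HDD: {hdd:6.0f} ms/s waiting, "
          f"SSD: {ssd:5.1f} ms/s waiting")
```

Under these assumptions, a few hundred swap-ins per second already push a single HDD toward saturation, while an SSD-backed host cache stays far below it; this is the behavior the experiment below quantifies.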

The diagram below depicts the difference between traditional HDD-based swapping and Swap to Host Cache on an SSD drive.

Fig 1: Traditional Swap vs. Swap to Host Cache on SSD Drive

At SanDisk, we carried out an experiment to validate this feature and compared the impact of running with and without it configured.

Testing Overview

We carried out this experiment by running the DVD Store SQL v2.1 workload, an online e-commerce load-generation tool, inside the VMs. We created artificial memory pressure on the ESXi host by running another VM, named “memtest ISO”, that consumes all of the memory assigned to it. This caused memory swapping on the host, and we then measured the impact on the Operations per Minute (OPM) of the DVD Store SQL v2.1 workload (similar to transactions per minute for an OLTP workload) running inside the other VMs on the same host.

We then measured how the OPM number decreases as memory pressure grows on the host. We found that when the “Swap to Host Cache” option is not configured, application performance degradation is severe.
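As a sketch of how such a comparison can be summarized, the short Python snippet below computes OPM degradation relative to an unpressured baseline for the two configurations. The OPM values in it are hypothetical placeholders, not our measured results; only the calculation itself is illustrated.

```python
# Summarize OPM degradation relative to a baseline run with no memory
# pressure. The OPM figures are hypothetical placeholders, not the
# results measured in this experiment.

baseline_opm = 10000   # hypothetical OPM with no memory pressure

runs = {
    "swap to HDD (no host cache)": 1500,   # hypothetical OPM under pressure
    "Swap to Host Cache on SSD":   6000,   # hypothetical OPM under pressure
}

for config, opm in runs.items():
    degradation_pct = 100.0 * (baseline_opm - opm) / baseline_opm
    print(f"{config:30s} OPM={opm:5d}  degradation={degradation_pct:4.1f}%")
```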

Testing Results

The graph below shows the impact on OPM when memory overcommitment is at 3.4x. The overcommitment factor was calculated using the following formula:

Memory overcommitment = total memory allocated to the running VMs / physical memory of the host
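As a minimal example of this calculation, with a hypothetical VM count, per-VM memory size and host memory (not the actual test-bed configuration), a 3.4x factor could arise as follows:

```python
# Compute the memory overcommitment factor of an ESXi host.
# VM count, per-VM memory and host memory are hypothetical examples,
# not the configuration used in this experiment.

vm_count = 17
vm_memory_gb = 8                 # memory configured per VM
host_physical_memory_gb = 40     # physical RAM in the host

overcommitment = (vm_count * vm_memory_gb) / host_physical_memory_gb
print(f"Memory overcommitment: {overcommitment:.1f}x")   # -> 3.4x
```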

As memory pressure increased, “Swap to Host Cache” became critical in reducing the impact of over-commitment. The graph shows the OPM values with and without “Swap to Host Cache” configured. It can be seen that with Swap to Host Cache configured, application performance is 4X better than without it.

Fig 2: Operations per Minute (OPM) with significant (3.4x) Overcommitment

Conclusion

This proves that configuring VMware vSphere’s “Swap to Host Cache” feature with SanDisk SSDs can largely contain the application performance degradation caused by swapping, even under significant memory pressure.

Though this experiment was carried out on a single host, the results can readily be extrapolated to a clustered environment with many hosts. For the storage administrator, this means you can leverage memory over-commitment to increase VM density while still maintaining overall application performance at a given SLA level.

From a business perspective, the cost per VM can be significantly reduced by configuring this feature, resulting in both CapEx and OpEx savings.

To learn more about SanDisk solutions for virtualization and VDI workloads, visit our website. If you have any questions, you can reach me at biswapati.Bhattacharjee/at/sandisk.com, or join the conversation on Twitter with @SanDiskDataCtr.
