It's hard to believe that some people still think SQL Server can't be run in a virtual machine (VM). That's definitely not the case. SQL Server can be run in a VM very successfully. However, due to its resource-intensive nature, there are a few best practices that you need to follow. They can be the difference between a poorly performing virtual SQL Server instance and a virtualized instance that provides exceptional performance. Here are some of the most essential tips for successfully virtualizing SQL Server.
Tip 1: Take Advantage of Second Level Address Translation
To run a bare metal virtualization platform like VMware's vSphere or Microsoft's Hyper-V, the host must use a 64-bit x64 processor, which can be from either AMD or Intel. In addition, for a resource-intensive workload like SQL Server, it's vital that the host processor supports Second Level Address Translation (SLAT). SLAT is known by different names, depending on the CPU manufacturer: Intel calls it Extended Page Tables (EPT), whereas AMD calls it Rapid Virtualization Indexing (RVI), also known as nested page tables.
VMs use physical host memory, and SLAT is a mechanism that enables the CPU to maintain the mapping between the virtual memory used in the VMs and the physical memory in the virtualization host. If this memory mapping function isn't performed by the CPU, it must be performed by the hypervisor. Having the CPU perform the memory mapping function provides better performance and improved scalability. Microsoft studies have shown that SLAT significantly reduces the hypervisor processing overhead to about 2 percent and simultaneously drops the host memory requirements by about 1MB per running VM. You can see an overview of how SLAT works to improve VM performance in Figure 1.
Most modern server platforms come equipped with x64 processors that have SLAT support. However, if you're considering virtualizing SQL Server on older systems, this might not be the case. Many older x64 systems are lacking SLAT support. So, if you're going to virtualize SQL Server on an older system, make sure it supports SLAT.
Tip 2: Don't Overcommit Your Physical Processors
SQL Server can automatically take advantage of multiple CPUs if they're present. If you're migrating from a physical SQL Server implementation to a virtualized SQL Server implementation, you can use the number of CPUs in the physical implementation as a guide for the number of virtual CPUs that you'll configure in the SQL Server VM. If you're creating a new SQL Server implementation, you should initially follow the application vendor's advice or use Performance Monitor from within the virtual instance to track CPU utilization.
The virtualization platform makes a difference here, particularly if you're implementing a SQL Server virtual instance that requires a high degree of scalability. Hyper-V in Windows Server 2008 and Windows Server 2008 R2 is limited to four virtual CPUs, so there's a definite ceiling for your virtual SQL Server instances on those platforms. Windows Server 2012 lifts this ceiling by boosting the virtual CPU support to 64 virtual CPUs. VMware vSphere 5.0 supports 32 virtual CPUs, and the newer vSphere 5.1 supports up to 64 virtual CPUs. If your virtualized SQL Server workload is highly CPU intensive, you need to be running Windows Server 2012 Hyper-V or vSphere 5.0 or later.
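After assigning virtual CPUs to a SQL Server VM, it's worth confirming from inside the guest that SQL Server actually sees them. One quick way is to query the sys.dm_os_sys_info dynamic management view:

```sql
-- Run inside the guest VM: shows the logical CPUs visible to SQL Server.
-- cpu_count should match the number of virtual CPUs assigned to the VM.
SELECT cpu_count,
       hyperthread_ratio,
       scheduler_count
FROM sys.dm_os_sys_info;
```

If cpu_count doesn't match the VM's virtual processor configuration, check the VM settings and any processor affinity or licensing limits before chasing performance problems elsewhere.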
Tip 3: Use Dynamic Memory
Memory is one of the biggest factors in SQL Server performance. This is true whether you're configuring physical or virtual SQL Server instances. SQL Server uses memory for its internal buffer and procedure caches: the buffer cache contains recently used data pages, whereas the procedure cache contains execution plans for recently executed T-SQL commands. These caches enable SQL Server to achieve higher levels of performance because they let SQL Server get the data and plans it needs from memory rather than incurring the I/O overhead of going to disk. SQL Server can automatically manage and grow its buffer and procedure caches based on the requirements of the workload and the memory that's available. Therefore, having available memory is vital for good SQL Server performance.
However, additional memory helps only up to a point, and in a virtual server environment, physical memory limits how many VMs you can have active at any one time. When planning your virtual server capacity, you need to allocate the amount of memory each SQL Server VM requires but no more, because over-allocation takes memory away from other VMs. Dynamic memory provides a great solution to the tension between SQL Server's need for memory and the need to share the same physical memory between VMs.
In order for SQL Server to take advantage of dynamic memory, the guest OS must be able to recognize hot-add RAM. To use Hyper-V dynamic memory, the Hyper-V host must be running Windows Server 2008 R2 SP1 or later. In addition, the guest OS running in the VM must support the ability to hot-add RAM. The following guest OSs can utilize Hyper-V dynamic memory:
- Windows Server 2012
- Windows Server 2008 R2 SP1
- Windows Server 2008 SP2
- Windows Server 2003 R2 SP2
- Windows 8
- Windows 7 SP1
- Windows Vista SP1
To take advantage of dynamic memory, you need to be running the Enterprise or Datacenter edition of SQL Server 2008 or later. The other SQL Server editions don't have the ability to utilize hot-add RAM and can't take advantage of dynamic memory.
When new memory is allocated to a guest VM, it looks like hot-add physical memory to the SQL Server instance. When SQL Server processes workloads, the sqlservr.exe process allocates and commits memory from the OS. SQL Server dynamic growth requires three factors:
- A workload
- A maximum server memory setting higher than the current memory allocation
- High-memory signals from the OS
When a SQL Server workload causes the sqlservr.exe process to grow, SQL Server will detect the added memory and grow to meet the workload demand. SQL Server checks OS memory every second and dynamically adjusts its memory use according to the available memory and the maximum server memory setting.
The white paper "Running SQL Server with Hyper-V Dynamic Memory" discusses a set of workload tests in which Microsoft compared eight SQL Server VMs configured with 7.5GB of static memory against eight VMs configured for dynamic memory, with 2GB of startup memory and 12GB of maximum memory. The workload ran at 12,500 SQL batches per second in both scenarios. The dynamic memory systems averaged 7,500 sustained I/O operations per second (IOPS), whereas the static memory systems required 12,000 sustained IOPS. In other words, dynamic memory supported the same throughput with only about 60 percent of the total IOPS.
Figure 2 shows the results of the dynamic memory test in Performance Monitor.
The black line shows the SQL Server buffer pool memory. The green line shows the number of disk reads per second. In the beginning of the test run, you can see that the number of disk reads was very high, but as additional memory was allocated to the VM, SQL Server was able to increase the size of its buffer pool. As a result, the number of disk reads dropped by nearly 50 percent.
SQL Server doesn't like to give up memory once it has allocated it. However, SQL Server will shrink its caches in response to memory pressure when the OS sends low-memory signals. SQL Server Resource Monitor checks for low-memory signals about every five seconds and will attempt to free memory until the low-memory signals stop.
If the available VM guest memory decreases when the hypervisor attempts to take memory away from the running VM and give it to another VM, the guest OS might page out parts of the SQL Server working set. This can have a very negative impact on SQL Server performance. To prevent this situation, Microsoft recommends using SQL Server's locked pages memory model (granting the SQL Server service account the Lock Pages in Memory privilege) to ensure that SQL Server memory is never paged out. You can find detailed information about using dynamic memory with SQL Server in the article "Using Hyper-V Dynamic Memory with SQL Server."
Although dynamic memory is a valuable tool for making more effective use of your virtualization resources, it's important to remember that it's not a silver bullet for performance problems. It's no substitute for good database design and tuning. You still need to use Performance Monitor to track your production servers' workload characteristics because system demands and performance often change over time.
Tip 4: Use Fixed Virtual Hard Disks or Pass-Through Disks
There are three basic types of virtual hard disks (VHDs) that you can use with VMs, whether you're using Hyper-V or vSphere. The three types of VHDs are:
- Dynamic. The dynamic VHD is initially sized according to the actual guest OS's storage requirements and can expand dynamically in response to increased storage requirements until it reaches its maximum allocated size.
- Fixed. The fixed VHD is initially sized and allocated at its maximum size.
- Differencing. The VHD is initially created using a base image that serves as a parent. You can then create different child disks as needed. The child disks use the parent as a base, but changes to each child disk are maintained independently.
The fixed VHD is almost always the best choice for virtualized SQL Server systems that run a production workload. Fixed disks use more storage, but they provide the best performance of all the different VHD options. Dynamic VHDs are a good choice for labs, test environments, and noncritical production workloads. Dynamic VHDs use less disk space, but they don't provide the same level of performance as a fixed hard disk. In addition, workloads running on dynamic VHDs can experience occasional pauses when the dynamic disk needs to be extended. Differencing disks are best suited for lab environments in which disk space is at a premium. They use far less storage, but they also have much lower levels of performance.
Another storage option for VMs is pass-through disks, which are an alternative to VHDs. Pass-through disks essentially dedicate a portion of the host's storage directly to the VM. The storage can be either a physical disk internal to the Hyper-V server or a SAN LUN mapped to the virtualization host. Pass-through disks offer the highest level of performance for VM storage. However, they don't have the flexibility of a fixed VHD: pass-through disks can't be moved without incurring downtime, and they don't support VM snapshots.
Generally, the best practice is to use fixed VHDs unless you absolutely must have that extra bit of performance that's afforded by pass-through disks. You can use dynamic disks for noncritical production workloads, as well as for testing, training, and lab environments.
Tip 5: Don't Use the Default Storage Configuration
Just like a physical server, a virtual server's disk configuration can make a huge impact on SQL Server performance. If you accept the default storage configuration that's offered by vSphere or Hyper-V, you'll end up with a poorly performing virtual SQL Server instance because the default configuration uses a single VHD for storage. In other words, your OS files, SQL Server data files, and SQL Server log files will all end up on the same VHD. The default storage configuration would be suitable only for small virtual SQL Server instances with low transaction rates. Most production workloads with higher transaction rates would immediately run into disk contention problems.
For a high-performance production virtual SQL Server instance, it's important that you put your OS files, data files, and log files on different VHDs or pass-through disks. If you're using a shared storage solution, it's also important that you be aware of the physical disk implementation and make sure that the disks used for the SQL Server log files are separate from the disks used for the SQL Server data files. You can see an example of this type of virtual storage design in Figure 3.
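When you create databases on such a layout, you direct the data and log files to the separate disks explicitly. A minimal sketch, assuming a hypothetical database named SalesDB with a D: drive backed by one VHD for data and an L: drive backed by another for the log:

```sql
-- Hypothetical layout: D: is a VHD dedicated to data files,
-- L: is a separate VHD dedicated to the transaction log.
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data, FILENAME = 'D:\SQLData\SalesDB.mdf', SIZE = 10GB)
LOG ON
    (NAME = SalesDB_log,  FILENAME = 'L:\SQLLogs\SalesDB.ldf', SIZE = 2GB);
```

Because SQL Server writes to the log sequentially and to the data files in a more random pattern, separating them keeps the two I/O patterns from competing for the same spindles.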
Tip 6: Be Aware of Licensing Gotchas
There's no doubt that licensing is one of the most important yet most confusing aspects of any Microsoft product, and virtualization makes it even more complex. SQL Server is no exception. The changes in the SQL Server 2012 editions and the new core-based licensing can make licensing virtual SQL Server instances confusing.
Table 1 presents the base prices and licensing models of the various SQL Server 2012 editions.

Table 1: SQL Server 2012 edition licensing

- Enterprise: per core
- Business Intelligence: per server and CAL; $8,592 + $209 for each CAL
- Standard: per core, or per server and CAL; $898 + $209 for each CAL
As you can see, the Enterprise edition is licensed only per core, whereas the new Business Intelligence edition is licensed only per server and CAL. The Standard edition can be licensed either per core or per server and CAL. SQL Server 2012's core licensing requires that you buy a minimum of four core licenses; additional core licenses are bought in packs of two. When you implement SQL Server 2012 in a VM, each virtual processor equates to a core. For example, if you license SQL Server 2012 Standard edition per core and install it in a VM with four virtual processors, you need to buy four core licenses. The one exception is if you buy the SQL Server 2012 Enterprise edition and license all the cores on the physical machine; you can then run an unlimited number of SQL Server instances in VMs on that host, whether it's running Hyper-V or vSphere.
Another possible gotcha is license mobility. Although technically you can use live migration to move VMs between hosts, there are licensing limitations that govern how frequently you can move VMs. OEM licenses can only be moved once every 90 days, which isn't very conducive to creating a dynamic data center. To move a VM more frequently, you must have the license covered by Software Assurance (SA). This makes SA a requirement if you're planning to implement a dynamic data center or private cloud.
The Virtual Reality
Although SQL Server was once considered too resource-intensive to virtualize, nowadays it's clear that virtualizing SQL Server provides many benefits over a purely physical implementation. Consolidating multiple servers as VMs will:
- Increase your hardware platforms' ROI because you're better utilizing the physical resources
- Reduce management overhead because you're reducing the number of physical servers that you need to manage
- Reduce your IT infrastructure's power, cooling, and space requirements
In addition, virtualization lays the foundation for the software-defined data center. If you pay attention to planning and follow the best practices, virtualized SQL Server instances can provide the performance and scalability to effectively support your mission-critical database applications.