
Wednesday, June 08, 2011

Hyper-V Live Migration: A Step-by-Step Guide

Live migration is probably the most important technology that Microsoft has added to Hyper-V in Windows Server 2008 R2. It enables virtual machines (VMs) to be moved between Hyper-V hosts with no downtime. Using live migration, you can migrate all VMs off a Hyper-V host that needs maintenance, then migrate them back when the maintenance is done. In addition, live migration lets you respond to periods of high resource utilization by moving VMs to hosts with more capacity, so the VMs can keep delivering good performance to end users even during busy periods.
      Live migrations can be manually initiated, or if you have System Center Virtual Machine Manager 2008 R2 and System Center Operations Manager 2007, you can run automated live migrations in response to workload. You need to complete quite a few steps to set up two systems for live migration, and I’ll guide you through the process. First, I’ll explain how live migration works. Then I’ll cover some of the hardware and software prerequisites that must be in place. Finally, I’ll walk you through the important points of the Hyper-V and Failover Clustering configuration that must be performed to enable live migration.
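      If you just want to kick off a live migration by hand once everything described later in this article is in place, you can do it from the Failover Clustering PowerShell module. In the minimal sketch below, the VM group name SQL-VM1 and the destination node HYPERV2 are example names; substitute your own:

      Import-Module FailoverClusters
      # Live-migrate the clustered VM "SQL-VM1" to the node HYPERV2.
      # Both names are placeholders for your own VM group and cluster node.
      Move-ClusterVirtualMachineRole -Name "SQL-VM1" -Node HYPERV2

      You can also start the same migration from the Failover Cluster Manager console by right-clicking the VM and choosing Live migrate virtual machine to another node.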

How Live Migration Works

Live migration takes place between two Hyper-V hosts. Essentially, the VM's memory is copied from one host to the other. After the memory is copied, the VM on the new host can access its virtual hard disk (VHD) files on shared storage, which both hosts can reach, and continue to run. When you initiate a live migration, which Figure 1 shows, the following steps occur (a conceptual sketch of the memory-copy loop follows the list):


      1. A new VM configuration file is created on the target server.
      2. The source VM’s initial memory state is copied to the target.
      3. Changed memory pages on the source VM are tagged and copied to the target.
      4. This process continues until the number of changed pages is small.
      5. The VM is paused on the source node.
      6. The final memory state is copied from the source VM to the target.
      7. The VM is resumed on the target.
      8. An Address Resolution Protocol (ARP) message is issued to update the network routing tables with the VM's new location.
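      To make steps 3 and 4 a little more concrete, the following PowerShell fragment is a purely conceptual sketch of the iterative pre-copy loop; it shuffles numbers in memory rather than real VM pages, but it shows how each pass copies the pages dirtied during the previous pass until only a small set remains for the final, paused copy:

      # Conceptual model only - real live migration copies actual VM memory pages.
      $dirty     = 1..1000      # page numbers still to be copied (all pages at first)
      $threshold = 20           # switch to the final copy once this few remain
      $pass      = 0

      while ($dirty.Count -gt $threshold) {
          $pass++
          $copied = $dirty      # copy this batch to the target (a no-op in the sketch)
          # Simulate the running guest re-dirtying a fraction of the pages that were
          # just copied; each pass therefore has less work to do than the last.
          $dirty = Get-Random -InputObject $copied -Count ([Math]::Max(1, [int]($copied.Count / 4)))
          Write-Host ("Pass {0}: copied {1} pages, {2} dirtied again" -f $pass, $copied.Count, $dirty.Count)
      }

      # Steps 5 through 7: pause the VM, copy the remaining pages, resume on the target.
      Write-Host ("Final blackout copy of {0} pages" -f $dirty.Count)

      The smaller that final set of pages, the shorter the pause in steps 5 through 7, which is why the switchover is imperceptible to users and connected clients.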

Requirements for Live Migration

On the hardware side, you need two x64 systems with compatible processors. It’s best if the host processors are identical, though it’s not required. However, they do need to be from the same processor manufacturer and family—you can’t perform a live migration when one host has an AMD processor and the other host has an Intel processor. Learn more about Hyper-V processor compatibility in the Microsoft white paper “Virtual Machine Processor Compatibility Mode.”
      In addition, each server should have at least three NICs running at 1Gbps: one for external network connections, one for iSCSI storage connectivity, and one for node management. Ideally, you'd dedicate a fourth NIC to live migration traffic, but live migration can also run over the external network connection; it will just be a little slower. Note that if you're building a server consolidation environment, you'll want additional NICs to carry the VMs' network traffic.
      On the software side, all the nodes that take part in live migration must run x64 Windows Server 2008 R2; the Standard, Enterprise, or Datacenter edition will do, and live migration is also supported by the Hyper-V Server 2008 R2 product. In addition, the Hyper-V role and the Failover Clustering feature must be installed on every server participating in live migration.
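      On a full installation, both can be added from an elevated PowerShell prompt with the ServerManager module (the same result is available through the Add Roles and Add Features wizards in Server Manager); run this on every node and reboot when prompted:

      # Run on each node that will participate in live migration
      Import-Module ServerManager
      Add-WindowsFeature Hyper-V               # the Hyper-V role (requires a reboot)
      Add-WindowsFeature Failover-Clustering   # the Failover Clustering feature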
      You also need shared storage, which can be either an iSCSI SAN or a Fibre Channel SAN; in this example, I used an iSCSI SAN. Be aware that the iSCSI target must support SCSI-3 persistent reservations, which failover clustering (and therefore live migration) requires. Some open-source iSCSI targets, such as OpenFiler, don't have that support at this time. If you want to try this in a local test lab and don't want to buy an expensive SAN, you might want to check out the free StarWind Server product at http://www.starwind.com/.
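      As an illustration, each node can register the SAN's portal and log on to the shared LUNs with the built-in iscsicli.exe utility (the iSCSI Initiator control panel does the same job); the portal IP address and target IQN below are placeholders for the values your own SAN provides:

      # Point the Microsoft iSCSI initiator at the SAN's portal (example address)
      iscsicli QAddTargetPortal 192.168.0.10

      # List the targets the portal exposes, then log on to the one holding the VM storage
      iscsicli ListTargets
      iscsicli QLoginTarget iqn.2011-06.com.example:vm-storage

      After logging on from both nodes, bring the new disk online, initialize it, and format it with NTFS in Disk Management on one node; also mark the logon as persistent in the iSCSI Initiator control panel so the connection survives a reboot.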

Failover Cluster Networking Configuration

Failover clustering is a requirement for live migration; you can live-migrate VMs only between the nodes of a failover cluster. The first step in creating a failover cluster is to configure the networking and storage. Figure 2 shows an overview of the network configuration that connects the clustered Windows servers to the external network and to the shared storage.


      In Figure 2, the servers use the 192.168.100.xxx subnet for client connections. The iSCSI SAN runs on an entirely separate physical network, configured with 192.168.0.xxx IP addresses. You can use different values for either of these address ranges; I chose these to more clearly differentiate the two networks. Ideally, you'd also have additional NICs for management and an optional dedicated live migration connection, but these aren't strictly required: live migration can work with a minimum of two NICs in each server.
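      Once both nodes can see the networks and the shared disk, the cluster itself can be validated and created. The following is a minimal sketch using the FailoverClusters module; the node names, cluster name, and cluster IP address are examples, and the Cluster Shared Volumes step is optional but convenient for storing VHDs:

      Import-Module FailoverClusters

      # Run the validation tests against both nodes first; live migration is
      # supported only on a configuration that passes validation
      Test-Cluster -Node HYPERV1, HYPERV2

      # Create a two-node cluster on the client-facing 192.168.100.x network
      New-Cluster -Name HVCLUSTER1 -Node HYPERV1, HYPERV2 -StaticAddress 192.168.100.50

      # Optionally enable Cluster Shared Volumes and add the iSCSI disk to it so
      # every node can reach the VMs' VHD files through a single namespace
      (Get-Cluster HVCLUSTER1).EnableSharedVolumes = "Enabled"
      Add-ClusterSharedVolume -Name "Cluster Disk 1"   # use your own disk resource name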
