
Credits

This article was originally published in Portuguese at:

https://social.technet.microsoft.com/wiki/pt-br/contents/articles/37944.passo-a-passo-hyper-v-e-failover-clustering-com-ambiente-san.aspx

Equipment

  1. 2x HP ProLiant ML350 G6 servers:
    • 1x Intel Xeon E5650 processor;
    • 2x 500 GB 7k SATA disks in RAID 1;
    • 2x redundant 750 W power supplies;
    • 2x onboard dual-gigabit NICs;
    • 6x HP NC112T offboard NICs
  2. 1x HP P2000 iSCSI 1Gb storage:
    • HP P2000 LFF Modular Smart Array Chassis;
    • HP P2000 G3 iSCSI MSA Controller;
    • 6x 300 GB 15k SAS disks in RAID 10
  3. 3x HP ProCurve switches

Software

Roles and Features

 

Network Interface Card

In a cluster design, the network adapters play a very important role, and the number of adapters must be planned carefully. In this project, we built a Windows Server Failover Cluster (WSFC) with Hyper-V using CSV and Live Migration and opted for the configuration below. None of these networks can share the same subnet; otherwise, the configuration will not pass cluster validation. Keep a naming pattern and rename all of the adapters, remembering that each one must have the same name on both nodes.

It is important to remember that, on each server, Windows must be installed on the same volume (usually C:).

Step by Step

If the servers do not have a disk RAID configured, do so following the manufacturer's guidance; if you have any questions about how to do this, refer to the manual. In this project, we opted for RAID 1 (mirroring) because the local disks store only the OS partition, so we bought just two disks. We recommend RAID 5, 6, or 10 for volumes that will store data; in this case, the data will reside on the storage array;

After the RAID is properly configured, install the operating system; in this project, Windows Server 2008 R2 Enterprise;

Install the antivirus; in this project, Symantec Endpoint Protection;

What is essential in this step is that you configure the main resources for the operation of the Cluster, involving:

Create a TEAM of the two cards dedicated to VMs. In this project, we used the HP teaming utility; find out whether your manufacturer provides a tool to do this.

Configure the network adapters on each node identically.

Heartbeat:

In the card's properties, disable all options except IPv4 and IPv6;

In IPv4, disable DNS and NetBIOS over IP registration;

Set the speed to 1000 / Full;

Set IP and Subnet Mask, only.
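
The same pattern used for the Heartbeat card (static IP only, no DNS registration, NetBIOS disabled) also applies to the CSV and Live Migration cards below. As an illustration, here is a minimal command-line sketch run from an elevated PowerShell prompt, assuming the adapter has been renamed "Heartbeat" and that 10.0.0.1/24 is a free address on that network (both are examples; the speed/duplex setting is still done in the adapter driver properties):

netsh interface ipv4 set address name="Heartbeat" source=static address=10.0.0.1 mask=255.255.255.0
# Remove any DNS servers and do not register this connection in DNS
netsh interface ipv4 set dnsservers name="Heartbeat" source=static address=none register=none
# Disable NetBIOS over TCP/IP (2 = disable) on the adapter holding this IP, via WMI
$nic = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = TRUE" | Where-Object { $_.IPAddress -contains "10.0.0.1" }
$nic.SetTcpipNetbios(2)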

 

CSV Card:

In the properties of the adapter, enable only Client for Microsoft Networks, File and Printer Sharing for Microsoft Networks, IPv4, and IPv6;

In IPv4, disable DNS and NetBIOS over IP registration;

Set the speed to 1000 / Full;

Set IP and Subnet Mask, only.

 

Live Migration Card:

In the card's properties, disable all options except IPv4 and IPv6;

In IPv4, disable DNS and NetBIOS over IP registration;

Set the speed to 1000 / Full;

Set IP and Subnet Mask, only.

 

Management / Node Communication Card (physical server):

In the properties of the card, keep all options enabled;

Set the speed to 1000 / Full;

Define IP, Subnet Mask, Gateway, and DNS.

 

VM Cards:

The two cards were configured as a TEAM for load balancing and fault tolerance. When the TEAM is created, all options on the physical cards are disabled by default.

 

iSCSI cards (node-to-storage communication):

In the card properties, disable all options except IPv4;

In IPv4, disable DNS and NetBIOS over IP registration;

Set the speed to 1000/Full and enable Jumbo packets (9014 bytes);

Enable Flow Control;

To test whether Jumbo frames are working end to end, use the command ping -l 8000 -f -n 5 <storage IP> (the -f flag forbids fragmentation, and the 8000-byte payload fits within a 9014-byte jumbo frame once the IP and ICMP headers are added).

 

TEAM virtual card, created from the two VM cards (it should already have been created):

This card will be selected when creating the virtual network in Hyper-V, so it will be configured automatically; in the end, only the virtual switch protocol option will remain enabled in its properties.

Add the Hyper-V role and the Failover Clustering feature on both servers. During the Hyper-V installation, do not select any cards for use by VMs; we will do this next;
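
Alternatively, both can be added from an elevated PowerShell prompt with the ServerManager module; a minimal sketch (a restart is required after adding the Hyper-V role):

Import-Module ServerManager
Add-WindowsFeature Hyper-V, Failover-Clustering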

 

Open Hyper-V Manager, click Virtual Network Manager, then select External and click Add. Choose a name and description for the network and, in the External option, select the TEAM card resulting from the two VM cards. To avoid creating a virtual card for use on the physical host, uncheck Allow management operating system to share this network adapter; in this project, we unchecked it. Click OK;

 

Change the binding order of the cards on each node. Open Control Panel, Network and Internet, Network and Sharing Center, Change adapter settings; press Alt to show the menu bar, then click Advanced, Advanced Settings.

Set the following order:

  1. Physical Node Management
  2. ISCSI 1
  3. ISCSI 2
  4. Live Migration
  5. CSV
  6. VM1 (TEAM)
  7. VM2 (TEAM)
  8. Heartbeat
  9. Virtual Team

 

12. Create an unprivileged domain account, just an ordinary user, and give it permission to create Computer objects in the domain. To do so, follow these steps:

  1. On the domain controller, open Active Directory Users and Computers;
  2. Right-click the domain and then click Delegate Control;
  3. In the wizard that opens, click Next and add the newly created user. Click Next once more;
  4. Select Create a custom task to delegate and then Next;
  5. Select the This folder, existing objects in this folder, and creation of new objects in this folder option, and then click Next;
  6. Check only the last box (Creation/deletion of specific child objects) and select Create Computer objects;
  7. Click Next, confirm the summary of settings, and then click Finish.

 

13. Add the two nodes to your domain;

 

14. Log on as a local administrator on both nodes and add the newly created domain account to the local Administrators group on each one. To do so, do the following:

  1. Open Server Manager;
  2. Go to Configuration, Local Users and Groups, Groups;
  3. Double-click Administrators, click Add, and enter the username. You will need to enter a domain account to complete this step;
  4. Click OK on the open screens, and then log on to Windows with that user on both nodes.

  

15. Establish a connection between the nodes and the storage using the iSCSI Initiator tool. In addition, path redundancy between the network adapters of the devices will be configured through the MPIO tool. To add the two tools to the nodes:

  1. For the iSCSI Initiator, open Control Panel and click the iSCSI Initiator icon. A prompt about starting the service automatically will appear; click Yes;
  2. For MPIO, open Server Manager, click Features, Add Features, and check the Multipath I/O option. Click Next and Install.
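
As a reference, the MPIO feature can also be added from an elevated PowerShell prompt; a minimal sketch (the mpclaim command below is the usual command-line equivalent of the "Add support for iSCSI devices" step further on, and it restarts the server automatically):

Import-Module ServerManager
Add-WindowsFeature Multipath-IO
# After the feature is installed, claim iSCSI devices for MPIO (-r reboots the server)
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"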

 

To configure each tool on the nodes, do the following:

  1. Open the iSCSI Initiator tool;
  2. Open the Discovery tab and click Discover Portal to add the first IP of the storage (remembering that in this project the storage has two controller modules totaling four iSCSI ports, two for each node);
  3. On the screen that opens, enter the storage IP address and port 3260 (the default) and click Advanced;
  4. Under Advanced Settings, in the Local adapter option, select Microsoft iSCSI Initiator. Then, in the Initiator IP option, select the IP of the node that will connect to the storage IP entered previously, and click OK;
  5. Navigate to the Targets tab. The newly created connection will be Inactive; to connect, click Connect and then Advanced;
  6. In the Local adapter option, select Microsoft iSCSI Initiator. In Initiator IP select the IP of the node and in Target portal IP select the IP of the storage. Click OK on the two open screens (a command-line sketch of these connection steps appears after this list);
  7. Open the MPIO tool and navigate to the Discover Multi-Paths tab. Check Add support for iSCSI devices and click Add. You will need to restart the server;
  8. After logging back into Windows, open the MPIO tool and check in MPIO Devices whether a new device ID appears in addition to the default Vendor 8Product 16 entry;
  9. Just to confirm, open the iSCSI Initiator tool and then Devices. Check that the disks available on the storage are listed; they should point to Target 0;
  10. Back on the Targets tab, click Connect and check the Enable multi-path option. Then click Advanced...;
  11. In the Local adapter option, select Microsoft iSCSI Initiator. In Initiator IP select the second IP of the node, the one that connects to the second module of the storage, and in Target portal IP select the corresponding IP of the storage. Click OK on the two open screens;
  12. Again in Targets, click Devices and check that the disks now point to Target 0 and Target 1;
  13. Select the first disk (Disk 1) and click MPIO. Under Load balance policy, select the desired policy. Repeat the same procedure for the other disks;
  14. If the MPIO policy is not the one you want, use the following procedure to change it:

     • Open the Disk Management tool in Server Manager;
     • On each disk, right-click and select Properties;
     • Navigate to the MPIO tab, select the desired policy, and click Apply;
     • Confirm that Path ID 77030000 is Active/Optimized and Path ID 77030001 is Active/Unoptimized; their targets are 0 and 1, respectively.

  15. Still in Disk Management, bring each disk online and initialize the partition style as MBR (if the disk is larger than 2 TB, select GPT). Create a simple volume and format it with NTFS. You do not need to assign drive letters. Then take the disks offline again.

    It is very important that you perform this last step on only one of the nodes. After you finish creating and formatting the partitions, leave the disks offline again. If they are kept online on both nodes at the same time, the partitions will be corrupted and the cluster validation will fail the storage tests.
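
For reference, the portal and target connection steps above can also be driven from the command line with the built-in iscsicli utility. This is only a rough sketch, assuming 192.168.50.10 is one of the storage iSCSI ports (a placeholder); note that these quick commands do not bind a specific initiator IP the way the GUI Advanced settings do:

iscsicli QAddTargetPortal 192.168.50.10
iscsicli ListTargets
# Log in to the target IQN reported by ListTargets (placeholder shown here)
iscsicli QLoginTarget <target IQN>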

 

16. Open the Failover Cluster Manager tool and click Validate a Configuration Wizard. Enter the names of the nodes and keep the option to run all the tests.

    At this point, you have already configured all the network cards and already have a connection to the storage with path redundancy. If you have not done all the previous steps, go back and be sure to do them.

    You can only continue to the cluster creation phase if all the tests pass successfully.
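
    The validation can also be started from PowerShell with the FailoverClusters module; a minimal sketch, where NODE1 and NODE2 are placeholder hostnames:

    Import-Module FailoverClusters
    Test-Cluster -Node NODE1, NODE2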

     

    17. After all the tests in the cluster validation have passed, it is time to create the cluster by entering the node hostnames, the cluster name, and an IP address. Check the summary of the configuration you created; the quorum must be Node and Disk Majority (Cluster Disk 1 reserved for the quorum) if you have only two nodes and one storage.
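
    Cluster creation can also be scripted; a minimal sketch, where CLUSTER01, the node names, and the IP address are placeholders:

    Import-Module FailoverClusters
    New-Cluster -Name CLUSTER01 -Node NODE1, NODE2 -StaticAddress 192.168.1.50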

       

    18. Now, with the cluster created and running, open the Failover Cluster Manager tool and then click Networks. Only the networks whose adapters have TCP/IP enabled will be displayed; in this project, only six. Rename each network to match the name previously given to the adapter in Windows. Then, in each of them, click Properties and configure them as follows:

    19. After configuring each network in Failover Cluster Manager, I recommend defining the AutoMetric and Metric values of each network manually, to ensure that the traffic passes through the correct cards. The values for each network will be:

    To set the above values, open Windows PowerShell Modules in Administrative Tools and run the commands:

    To check the current values enter:

    Get-ClusterNetwork | Ft Name, Metric, AutoMetric

    To set the value on the CSV card:

    $Csv = Get-ClusterNetwork "<CSV network name>"

    $Csv.Metric = 500

    To set the value on the LM card:

    $Lm = Get-ClusterNetwork "<Live Migration network name>"

    $Lm.Metric = 1000

    To set the value on the HB card:

    $Hb = Get-ClusterNetwork "<Heartbeat network name>"

    $Hb.Metric = 1500

    Confirm the result:

    Get-ClusterNetwork | Ft Name, Metric, AutoMetric

    The other networks should already have the values stated above; if not, set them as shown in the commands, changing only the network names.

     

    20. Enable Cluster Shared Volumes in Failover Cluster Manager:

    It is on these volumes that all VMs must be stored, so that if one node stops because of a failure, the node that is still running takes over and keeps the business going.
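
    After CSV is enabled, the available cluster disks can also be added to it from PowerShell; a minimal sketch, where "Cluster Disk 2" is an example name (check the actual disk name under Storage):

    Import-Module FailoverClusters
    Add-ClusterSharedVolume -Name "Cluster Disk 2"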

       

    Now that your cluster has been created and configured and CSV is enabled, it is time to create a VM. To do so, follow these steps:

    1. In Failover Cluster Manager, right-click Services and Applications, then Virtual Machines, New Virtual Machine, and select the node that will manage the VM initially;
    2. In the storage option, enter the CSV path, which is under C:\ClusterStorage. The remaining settings should be defined for each VM individually.

    Now your VM is configured in a high-availability environment provided by Failover Clustering.
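
    If a VM is instead created directly in Hyper-V Manager, it can be made highly available afterwards; a minimal sketch, where "VM01" is a placeholder VM name:

    Import-Module FailoverClusters
    Add-ClusterVirtualMachineRole -VirtualMachine "VM01"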

     

    If for some reason all the nodes and the storage go down at once, you may need to intervene and force the cluster service to start without quorum. Run the following command at an elevated prompt:

    Net start clussvc /fq

    If needed, run this command on all servers in the cluster. To verify that you have quorum again, open Failover Cluster Manager and check the Quorum Configuration option; it should show Node and Disk Majority (or whichever quorum model applies to your cluster).
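
    The quorum configuration and the state of each node can also be checked from PowerShell; a minimal sketch:

    Import-Module FailoverClusters
    Get-ClusterQuorum
    Get-ClusterNode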

     

    Other Languages

    This article is also available in the following languages:

    Passo a Passo Hyper-V e Failover Clustering com Ambiente SAN (pt-BR)