Credits
This article was originally written in Portuguese (pt-BR).
Equipment
- 1. 2x HP ProLiant ML350 G6 servers:
- - 1x Intel Xeon E5650 processor;
- - 2x 500 GB 7k SATA disks in RAID 1;
- - 2x 750 W redundant power supplies;
- - 2x onboard dual-port gigabit NICs;
- - 6x HP NC112T add-in NICs
- 2. 1x HP P2000 1 Gb iSCSI storage:
- - HP P2000 LFF Modular Smart Array Chassis;
- - HP P2000 G3 iSCSI MSA Controller;
- - 6x 300 GB 15k SAS disks in RAID 10
- 3. 3x HP ProCurve switches
Software
- - Microsoft Windows Server 2008 R2 SP1 Enterprise (Full installation);
- - Symantec Endpoint Protection antivirus;
- - Windows and antivirus updates;
- - Driver and firmware updates.
Roles and Features
- - iSCSI Initiator;
- - MPIO;
- - Failover Cluster;
- - Hyper-V.
Network Interface Card
In a cluster design, the network cards play a very important role, and the number of cards must be carefully planned. In this project, we built a Windows Server Failover Cluster (WSFC) with Hyper-V using CSV and Live Migration, and opted for the following configuration:
- 1x Heartbeat;
- 1x CSV;
- 1x Live Migration;
- 1x Host management and LAN communication;
- 2x VMs (NIC teaming);
- 2x iSCSI;
- 1x Virtual
It is important to remember that Windows must be installed on the same volume (usually C:) on each server.
Step by Step
If the servers do not have a disk RAID configured, do so as per the manufacturer's guidance; if you have any questions about how to do this, refer to the manual. In this project, we opted for RAID 1 (mirroring) for a reason: we are only storing the OS partition, so we bought just two disks. We recommend RAID 5, 6, or 10 on partitions that will store data; in this project, that data lives on the storage array;
After the RAID is properly configured, install the operating system. In this project, Windows Server 2008 R2 Enterprise.
Install the antivirus; in this project, Symantec Endpoint Protection;
- Install any drivers that were not automatically installed with the OS. A very important point is to make sure you install the latest version from the vendor's website; this prevents issues that can occur with outdated versions. In this project, we updated the drivers for the network cards, SAS/SATA controller, and video, and we installed an HP application to manage the network cards and create the team;
- Update the BIOS of the servers. In this project it was not necessary because the latest version was already installed;
- Set up your storage. In this project, as stated earlier, we used the HP P2000 1 Gb iSCSI. To do this, we connected a network cable to the management interface on the back and configured our network card in the same network range as the storage's management interface. The storage can also be managed through its command-line interface (CLI). We will not go into more detail here because your project may use a different storage model.
What is essential in this step is that you configure the main resources for the operation of the Cluster, involving:
- The configuration of management network cards;
- The configuration of the iSCSI network cards (IP address of each port, enable Jumbo Frames, set the speed, etc.);
- Virtual disk configuration. In this project we created only one, using the full capacity of the RAID 10 disks, for performance and security;
- The configuration of volumes (LUNs). In this project we created four;
- Set up LUN masking. This step maps each LUN to the cluster nodes; otherwise, when connecting a node to the storage (via iSCSI Initiator), no volume will be available. To do so, you must know the IQN of each server: open iSCSI Initiator, Configuration, Initiator Name.
Depending on your storage, when the node first connects to the storage, the server will automatically become visible in the storage with its IQN. Then just do the masking for each LUN, granting read and write access.
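To collect the IQN from the command line instead of the GUI, a minimal PowerShell sketch is shown below; it assumes the Microsoft iSCSI Initiator service (MSiSCSI) is already running on the node.
# Read the local initiator name (IQN) from WMI; the MSiSCSI service must be running
$initiator = Get-WmiObject -Namespace root\wmi -Class MSiSCSIInitiator_MethodClass
$initiator.iSCSINodeName    # returns something like iqn.1991-05.com.microsoft:<hostname>
Run it on each node and register the returned IQN in the storage's LUN masking.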
Create a team of the two cards dedicated to VMs. In this project, we used the HP utility; find out whether your manufacturer provides a tool for this.
Configure the network adapters on each node identically.
Heartbeat:
In the card's properties, disable all options except IPv4 and IPv6;
In IPv4, disable DNS registration and NetBIOS over TCP/IP;
Set the speed to 1000 / Full;
Set IP and Subnet Mask, only.
CSV Card:
In the properties of the adapter, enable only Client for Microsoft Networks, File and Printer Sharing, IPv4, and IPv6;
In IPv4, disable DNS registration and NetBIOS over TCP/IP;
Set the speed to 1000 / Full;
Set IP and Subnet Mask, only.
Live Migration Card:
In the card's properties, disable all options except IPv4 and IPv6;
In IPv4, disable DNS registration and NetBIOS over TCP/IP;
Set the speed to 1000 / Full;
Set IP and Subnet Mask, only.
Node management / LAN communication card (physical server):
In the properties of the card, keep all options enabled;
Set the speed to 1000 / Full;
Define IP, Subnet Mask, Gateway, and DNS.
VM Cards:
The two cards are configured as a team for load balancing and fault tolerance. When the team is created, all options on the member cards are disabled by default.
iSCSI cards (node-to-storage communication):
In the card properties, disable all options except IPv4;
In IPv4, disable DNS registration and NetBIOS over TCP/IP;
Set the speed to 1000 / Full and enable Jumbo Packet at 9014 bytes;
Enable Flow Control;
To test whether Jumbo Frames are supported, use the command: ping -l 8000 -f -n 5 <storage IP>.
Virtual TEAM card between the two VM cards (it should already have been created):
This card will be selected when creating the virtual network in Hyper-V, so it will be configured automatically; in the end, only the Virtual Switch option will be enabled in its properties.
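The per-card steps above repeatedly disable DNS registration and NetBIOS over TCP/IP. If you prefer to script that part, the WMI sketch below is one way to do it, assuming a card that was renamed Heartbeat (adapt the name for the CSV, Live Migration and iSCSI cards):
# Locate the adapter by its connection name and fetch its TCP/IP configuration
$nic = Get-WmiObject Win32_NetworkAdapter -Filter "NetConnectionID = 'Heartbeat'"
$cfg = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "Index = $($nic.Index)"
$cfg.SetDynamicDNSRegistration($false, $false)   # do not register this connection in DNS
$cfg.SetTcpipNetbios(2)                          # 2 = disable NetBIOS over TCP/IP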
Add the Hyper-V role and the Failover Clustering feature on both servers. In the Hyper-V installation, do not select any cards for use by VMs; we will do this next;
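Both can also be added from PowerShell using the Server Manager module; a minimal sketch (adding Hyper-V requires a restart):
Import-Module ServerManager
# Add the Hyper-V role and the Failover Clustering feature on this node
Add-WindowsFeature Hyper-V, Failover-Clustering
# Restart the server to complete the Hyper-V installation
Restart-Computer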
Open Hyper-V Manager, click Virtual Network Manager, then External and Add. Choose a name and description for the network and, in the External option, select the TEAM card resulting from the two VM cards. To avoid creating a virtual card for use on the physical host, uncheck Allow management operating system to share this network adapter; in this project, we unchecked it. Click OK;
Change the binding order of the cards on each node. Open Control Panel, Network and Internet, Network and Sharing Center, Change adapter settings, press Alt to show the menu bar, click Advanced, then Advanced Settings.
Set the following order:
- 1º - Physical Node Management
- 2º - iSCSI 1
- 3º - iSCSI 2
- 4º - Live Migration
- 5º - CSV
- 6º - VM1 (TEAM)
- 7º - VM2 (TEAM)
- 8º - Heartbeat
- 9º - Virtual Team
12. Create an unprivileged domain account (just an ordinary user). Give this user permission to create Computer objects in the entire domain. To do so, follow these steps:
- 1º On the domain controller, open Active Directory Users and Computers;
- 2º Right-click on the domain and then Delegate Control;
- 3º In the wizard that opens, click Next and add the newly created cluster user. Click Next once more;
- 4º Select Create a custom task to delegate and then Next;
- 5º Select This folder, existing objects in this folder, and creation of new objects in this folder, and then click Next;
- 6º Under Show these permissions, check only the last box (Creation/deletion of specific child objects), and then select the first option, Create Computer objects;
- 7º Click Next. Confirm the summary of settings and then Finish.
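As an alternative to the Delegate Control wizard, the same right can be granted with dsacls from an elevated prompt on the domain controller. The sketch below uses a hypothetical domain contoso.local and a hypothetical account contoso\svc-cluster; replace both with your own values.
# Allow the cluster account to create Computer objects (CC = Create Child) in the whole domain
dsacls "DC=contoso,DC=local" /I:T /G "contoso\svc-cluster:CC;computer"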
Add the two nodes into your domain;
Log on as a local administrator on both nodes and add the newly created domain account to the local Administrators group. To do so, do the following:
- 1º Open Server Manager;
- 2º Then, Configuration, Local Users and Groups, Groups;
- 3º Double-click Administrators, click Add, and enter the user name. You will need to provide a domain account's credentials to complete this step;
- 4º Click Ok on the open screens, and then log on to Windows with the respective user on both nodes.
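The same membership can also be granted from an elevated prompt; a one-line sketch using the hypothetical account contoso\svc-cluster:
# Add the domain account to the local Administrators group on this node
net localgroup Administrators contoso\svc-cluster /add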
Establish a connection between the nodes and the storage using the iSCSI Initiator tool. In addition, path redundancy between the devices' network adapters will be configured through the MPIO tool. To add the two tools to the nodes:
- 1º For the iSCSI Initiator, open Control Panel and click on it. A prompt asking to start the service automatically will appear; click Yes;
- 2º For MPIO, open Server Manager, click Features, Add Features, and check the Multipath I/O option. Click Next and Install.
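Both steps can also be performed from PowerShell; a minimal sketch (Multipath-IO and MSiSCSI are the feature and service names used by Windows Server 2008 R2):
Import-Module ServerManager
# Start the Microsoft iSCSI Initiator service and set it to start automatically
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
# Add the Multipath I/O feature (the server may need a restart afterwards)
Add-WindowsFeature Multipath-IO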
To configure each tool on the nodes, do the following:
- 1º Open the iSCSI Initiator tool;
- 2º Open the Discovery tab and click Discover Portal to add the first IP of the storage (remember that in this project the storage has two controller modules, totaling four iSCSI ports, two for each node);
- 3º In the window that opens, enter the storage IP address and port 3260 (the default), and click Advanced;
- 4º Under Advanced Settings, in the Local adapter option, select Microsoft iSCSI Initiator. Then, in the Initiator IP option, select the IP of the node that will connect to the storage IP entered previously, and click OK;
- 5º Navigate to the Targets tab. The newly created connection will be inactive; to connect, click the Connect button and then Advanced;
- 6º In the Local adapter option, select Microsoft iSCSI Initiator. In Initiator IP select the IP of the node, and in Target portal IP select the IP of the storage. Click OK on the two screens that are open;
- 7º Open the MPIO tool and navigate to the Discover Multi-Paths tab. Check Add support for iSCSI devices and then click Add. You will need to restart the server;
- 8º After logging back into Windows, open the MPIO tool and check under MPIO Devices whether a new device ID appears in addition to the default Vendor 8Product 16 entry;
- 9º Just to confirm, open the iSCSI Initiator tool and then Devices. Check that the disks available in the storage appear; they should point to Target 0;
- 10º Back on the Targets tab, click Connect and check the Enable multi-path option. Then click Advanced;
- 11º In the Local adapter option, select Microsoft iSCSI Initiator. In Initiator IP select the second IP of the node, which will connect to the second module of the storage, and in Target portal IP select the IP of the storage. Click OK on the two screens that are open;
- 12º Again in Targets, click Devices and check that the disks now point to Target 0 and Target 1;
- 13º Select the first disk (Disk 1) and click MPIO. Under Load balance policy, select the desired policy. Repeat the same procedure for the other disks;
- 14º If the MPIO policy is not the one you want, use the following procedure to change it:
- Open the Disk Management tool in Server Manager;
- On each disk, right-click and select Properties;
- Navigate to the MPIO tab, select the desired policy, and click Apply;
- Confirm that Path ID 77030000 is Active/Optimized and Path ID 77030001 is Active/Unoptimized. Their targets are 0 and 1, respectively;
- 15º Still in Disk Management, select each disk, bring it online, and set the partition style to MBR (if larger than 2 TB, select GPT). Create a simple volume and format it with NTFS. You do not need to assign drive letters. Then set the disks to Offline again.
It is very important that you perform this last step on only one of the nodes. After you finish creating and formatting the partition, leave the disks offline again. If they are kept online simultaneously on both nodes, the partition will be corrupted and the cluster validation will fail the storage tests.
At this point, you have already configured all the network cards and have a connection to the storage with path redundancy. If you have not done all the previous steps, go back and be sure to complete them.
Open the Failover Cluster Manager tool and click Validate a Configuration Wizard. Enter the name of the nodes and keep the option to run all the tests.
You can only continue to the cluster creation phase if all the tests pass successfully.
17. After all tests in the cluster validation have passed, it is time to create the cluster by entering the node hostnames, the cluster name, and an IP address. Check the summary of the configuration you created; the quorum must be Node and Disk Majority (Cluster Disk 1 reserved for quorum) if you have only two nodes and one storage.
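For reference, validation and cluster creation can also be done with the FailoverClusters PowerShell module; a minimal sketch using the hypothetical node names NODE1 and NODE2, cluster name HVCLUSTER, and a placeholder IP address:
Import-Module FailoverClusters
# Run the full set of validation tests against both nodes
Test-Cluster -Node NODE1, NODE2
# Create the cluster only after validation has passed
New-Cluster -Name HVCLUSTER -Node NODE1, NODE2 -StaticAddress 192.168.1.50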
18. Now, with the cluster created and running, open the Failover Cluster Manager tool and click Networks. Only the networks whose cards have TCP/IP enabled will be displayed; in this project, only six. Rename each one to match the name previously given in Windows. For each of them, click Properties and configure as follows:
- · Heartbeat = Allow cluster network communication on this network
- · CSV = Allow cluster network communication on this network
- · Live Migration = Do not allow cluster network communication on this network
- · Management Node = Allow cluster network communication on this network / Allow clients to connect through this network
- · iSCSI 1 = Do not allow cluster network communication on this network
- · iSCSI 2 = Do not allow cluster network communication on this network
19. After configuring each network in Failover Cluster Manager, I recommend defining the Metric and AutoMetric values on each cluster network manually, to ensure that traffic passes through the correct cards. The values for each network will be:
- · Heartbeat = Metric = 1500. AutoMetric = False
- · CSV = Metric = 500. AutoMetric = False
- · Live Migration = Metric = 1000. AutoMetric = False
- · Management Node = Metric = 10000. AutoMetric = True
- · iSCSI 1 = Metric = 10100. AutoMetric = True
- · iSCSI 2 = Metric = 10200. AutoMetric = True
To set the above values, open Windows PowerShell Modules in Administrative Tools and run the commands:
To check the current values enter:
Get-ClusterNetwork | Ft Name, Metric, AutoMetric
To set the value on the CSV network (use the network name exactly as it appears in Failover Cluster Manager):
$Csv = Get-ClusterNetwork "CSV"
$Csv.Metric = 500
To set the value on the Live Migration network:
$Lm = Get-ClusterNetwork "Live Migration"
$Lm.Metric = 1000
To set the value on the Heartbeat network:
$Hb = Get-ClusterNetwork "Heartbeat"
$Hb.Metric = 1500
Confirm the result:
Get-ClusterNetwork | Ft Name, Metric, AutoMetric
The other networks should already have the values stated above; if not, set them in the same way, changing only the network name in the commands.
20. Enable Cluster Shared Volumes in Failover Cluster Manager:
- · Accept the notice and click OK;
- · Right-click the new Cluster Shared Volumes option and then Add storage;
- · Add the disks you will use for the VMs (these are the LUNs made available by the storage);
- · After enabling CSV, you will notice that a directory called ClusterStorage was created on drive C:;
- · Inside it, each disk is represented as Volume X, where X is the number of each volume.
All VMs must be stored in these volumes so that, if one node stops because of a failure, the other running node takes over and keeps the business operating.
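Adding a disk to CSV can also be done from PowerShell; a minimal sketch assuming the hypothetical cluster disk name Cluster Disk 2 (check the real name under Storage in Failover Cluster Manager or with Get-ClusterResource):
Import-Module FailoverClusters
# List the physical disk resources currently owned by the cluster
Get-ClusterResource | Where-Object { $_.ResourceType -like "Physical Disk" }
# Add the chosen disk to Cluster Shared Volumes
Add-ClusterSharedVolume -Name "Cluster Disk 2"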
Now that your cluster has been created, configured, and CSV enabled, it's time to create a VM. To do this, do the following steps:
- 1º In Failover Cluster Manager, right-click Services and Applications, then Virtual Machines, New Virtual Machine, and select the node that will host it initially;
- 2º In the storage option, enter the CSV path under C:\ClusterStorage. The remaining settings should be defined for each VM individually.
Now your VM is configured in a high availability environment by Failover Clustering.
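If a VM was instead created directly in Hyper-V Manager, it can be made highly available afterwards. A sketch, assuming a hypothetical VM named VM01 whose files already live under C:\ClusterStorage (the -VirtualMachine parameter name is the one I recall from the 2008 R2 FailoverClusters module; check Get-Help Add-ClusterVirtualMachineRole in your environment):
Import-Module FailoverClusters
# Configure an existing Hyper-V virtual machine for high availability
Add-ClusterVirtualMachineRole -VirtualMachine "VM01"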
If for some reason all the nodes and the storage go down at once, you may need to intervene with a command to force quorum so the cluster can start again. Run the following command at an elevated prompt:
Net start clussvc /fq
If needed, run this command on all servers in the cluster. To verify that you have quorum again, open Failover Cluster Manager; under the Quorum Configuration option, you should see Node and Disk Majority (or whichever quorum model applies to your cluster).
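The quorum model can also be checked quickly from PowerShell; a minimal sketch:
Import-Module FailoverClusters
# Show the current quorum type and the quorum (witness) resource
Get-ClusterQuorum | Format-List Cluster, QuorumResource, QuorumType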
Other Languages
This article is also available in the following languages:
Passo a Passo Hyper-V e Failover Clustering com Ambiente SAN (pt-BR)