I’ve already written about guest clusters in my previous blog posts – now I’d like to create a new guest cluster using Windows Server 2016 as the host OS and a multipath, Windows Server 2016-based iSCSI target. In this article I’ll walk through all the steps required to create a two-way iSCSI connection to the iSCSI target. Here’s the schematic of my test network:
Host3 – iSCSI target, Windows Server 2016 Standard edition (of course, in production networks iSCSI targets should not be connected to any switches except dedicated iSCSI switches – in this test I use the HOST3 network connection this way only for convenience).
I’ve already described the process of creating an iSCSI target in Windows Server 2012 R2, so I’ll just recap what needs to be done in Windows Server 2016 to create an iSCSI target, as the process has not changed since Windows Server 2012 R2:
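For reference, the same role service can also be installed from an elevated PowerShell prompt instead of Server Manager – a minimal sketch (FS-iSCSITarget-Server is the role service name used by Install-WindowsFeature):

# Install the iSCSI Target Server role service on Host3
Install-WindowsFeature -Name FS-iSCSITarget-Server -IncludeManagementTools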
Once the installation is complete I can proceed to creating the folders that will hold the virtual iSCSI disks on Host3. Since I’m going to create a guest cluster with a virtual domain controller and a virtual Exchange server, I will create three folders on different drives: the first one – iSCSI – for the cluster quorum, the second – iSCSI1 – for the virtual DC, and the third one – iSCSI2 – for the virtual Exchange 2016.
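The virtual disks and the target can also be created from PowerShell. Below is a minimal sketch – the drive letters, sizes, target name and initiator IQN are placeholders for my lab, not the exact values from the screenshots:

# Create the three iSCSI virtual disks (quorum, DC, Exchange) in the three folders
New-IscsiVirtualDisk -Path "D:\iSCSI\Quorum.vhdx"    -SizeBytes 1GB
New-IscsiVirtualDisk -Path "E:\iSCSI1\DC.vhdx"       -SizeBytes 60GB
New-IscsiVirtualDisk -Path "F:\iSCSI2\Exchange.vhdx" -SizeBytes 200GB

# Create a target and allow the initiator to connect (the IQN is an example)
New-IscsiServerTarget -TargetName "GuestCluster" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:host1.contoso.com"

# Map all three virtual disks to the target
Add-IscsiVirtualDiskTargetMapping -TargetName "GuestCluster" -Path "D:\iSCSI\Quorum.vhdx"
Add-IscsiVirtualDiskTargetMapping -TargetName "GuestCluster" -Path "E:\iSCSI1\DC.vhdx"
Add-IscsiVirtualDiskTargetMapping -TargetName "GuestCluster" -Path "F:\iSCSI2\Exchange.vhdx"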
3) Connect to the target:
As Host1 is connected to the target (Host3) via two independent paths, I have to connect to the target twice and configure two pairs of target/initiator addresses – for this I click the Advanced… button:
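The same two sessions can be established from PowerShell as well. A rough equivalent of the GUI steps above – the IP addresses and the target IQN are placeholders for the two iSCSI subnets in my lab:

# Register the target portal for each path (initiator/target address pairs)
New-IscsiTargetPortal -TargetPortalAddress 10.0.1.3 -InitiatorPortalAddress 10.0.1.1
New-IscsiTargetPortal -TargetPortalAddress 10.0.2.3 -InitiatorPortalAddress 10.0.2.1

# IQN of the target on Host3 (check the real value with Get-IscsiTarget)
$iqn = "iqn.1991-05.com.microsoft:host3-guestcluster-target"

# Connect over both paths, marking the sessions as multipath-capable and persistent
Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 10.0.1.3 `
    -InitiatorPortalAddress 10.0.1.1 -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 10.0.2.3 `
    -InitiatorPortalAddress 10.0.2.1 -IsMultipathEnabled $true -IsPersistent $true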
On the Volumes and Devices tab we can see the three iSCSI VHDs:
Please note that the third volume still doesn’t use MPIO (there’s no mpio prefix after \\?\ in its device path), so I press Auto Configure and all three volumes “become multipathed”:
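(If a volume doesn’t show up as an mpio device at all, the usual prerequisite to check is whether the MPIO feature is installed and the Microsoft DSM is allowed to claim iSCSI-attached disks – the PowerShell equivalent of ticking “Add support for iSCSI devices” in the MPIO control panel:)

# Install the MPIO feature and let the Microsoft DSM claim iSCSI-attached disks
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Check which bus types the DSM claims automatically
Get-MSDSMAutomaticClaimSettings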
As you can see, there are two active connections to the target because, by default, MPIO uses the Round Robin load-balance policy – it allows both paths to be used simultaneously for load distribution.
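The current default policy can be checked (and changed) from PowerShell; RR stands for Round Robin:

# Show the default load-balance policy the Microsoft DSM applies to new devices
Get-MSDSMGlobalDefaultLoadBalancePolicy

# Change it if needed, e.g. FOO (Fail Over Only) or back to RR (Round Robin)
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR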
Once the multipath connection to the target is established, I can proceed to formatting the newly created disks (in fact, iSCSI VHDs) in Disk Management:
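The same can be done from PowerShell instead of Disk Management – a quick sketch (the volume label is just an example, and the command formats every raw iSCSI disk it finds, so use it with care):

# Bring the iSCSI disks online, then initialize, partition and format the raw ones
Get-Disk | Where-Object { $_.BusType -eq 'iSCSI' } | Set-Disk -IsOffline $false
Get-Disk | Where-Object { $_.BusType -eq 'iSCSI' -and $_.PartitionStyle -eq 'RAW' } |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "iSCSI-LUN" -Confirm:$false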
Now let’s see the default MPIO configuration (please run PowerShell as Administrator):
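The cmdlet that shows these settings is Get-MPIOSetting:

# Show the current MPIO/DSM parameters (path verification, retry settings, disk timeout)
Get-MPIOSetting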
Please note that the disk timeout (DiskTimeoutValue) is set to 60 seconds – this means that if one of the paths fails, MPIO will wait 60 seconds before failing over the in-flight I/O operations (which were being carried out along the failed path) to the second path. I think this is too long, so I’ll change this value to 20 seconds:
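The change (and the subsequent restart) can be done with Set-MPIOSetting:

# Lower the MPIO disk timeout from the default 60 seconds to 20 seconds
Set-MPIOSetting -NewDiskTimeout 20

# The new value takes effect after a reboot
Restart-Computer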
Once the server is restarted, I can test the multipath connection by copying a file to an iSCSI volume and unplugging one of the patch cords from the server: after the disk timeout period (20 sec.) the copy operation should resume.
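If you want to confirm from the command line which paths are up during the test, the built-in mpclaim utility can display the MPIO disk topology (the disk number below is just an example):

# List all MPIO disks and their load-balance policies
mpclaim -s -d

# Show the individual paths and their state for disk 0
mpclaim -s -d 0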
Note: you may find the picture above (with the copy progress bar) not very descriptive – this is due to the speed scale Windows Server 2016 chooses (for some unknown reason!) for the bar: the actual copy speed is <=250 MB/s (using 2 x 1 Gbps network adapters), which is why the green speed “bar” looks so “weak”.
MPIO in Windows Server 2016 does provide both network speed aggregation and network-attached storage fault tolerance.