Connecting to the iSCSI target using MPIO in Windows Server 2016

I’ve already written about guest clusters in my previous blog posts – now I’d like to create a new guest cluster using Windows Server 2016 as the host OS and a multipath Windows Server 2016-based iSCSI target. In this article I’ll walk through all the steps required to create a two-way iSCSI connection to the iSCSI target.

Network settings:

Host3 – iSCSI Target, Windows Server 2016 Standard edition.

Host1 – iSCSI initiator, Windows Server 2016 Standard edition.

All network adapters to be used for iSCSI traffic should have only IP addresses and subnet masks set up – all other options/settings should be left blank.

I’ve already described the process of creating an iSCSI target in Windows Server 2012 R2, and since the process has not changed in Windows Server 2016, I’ll just recap the steps here:

1) On Host3: I start the Add Roles and Features Wizard and add the iSCSI Target Server services to Host3.
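
By the way, the same role can be installed from PowerShell as well – a minimal sketch (run in an elevated session):

# Install the iSCSI Target Server role service with its management tools
Install-WindowsFeature -Name FS-iSCSITarget-Server -IncludeManagementTools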

Once the installation is complete I can proceed to creating the folders that will hold the virtual iSCSI disks on Host3. Since I’m going to create a guest cluster with a virtual domain controller and a virtual Exchange server, I will create three folders on different drives: the first one – iSCSI – for the cluster quorum, the second – iSCSI1 – for the virtual DC, and the third – iSCSI2 – for the virtual Exchange 2016.
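
The folders can be created from PowerShell too – a sketch with example drive letters (adjust them to your disk layout):

# Create the folders that will hold the iSCSI virtual disks
New-Item -ItemType Directory -Path 'C:\iSCSI', 'D:\iSCSI1', 'E:\iSCSI2'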

Now I create the virtual iSCSI disks and the new target.

The first VHD and the target are created.

In the same manner I add the other two VHDs to the same target.
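
For reference, roughly the same can be scripted with the IscsiTarget module – the paths, sizes, target name and initiator addresses below are placeholders, not the exact values from my lab:

# Create the three virtual iSCSI disks (VHDX files)
New-IscsiVirtualDisk -Path 'C:\iSCSI\Quorum.vhdx' -SizeBytes 1GB
New-IscsiVirtualDisk -Path 'D:\iSCSI1\DC.vhdx' -SizeBytes 60GB
New-IscsiVirtualDisk -Path 'E:\iSCSI2\Exchange.vhdx' -SizeBytes 100GB

# Create the target and allow the initiator (Host1) to connect, here by its IP addresses
New-IscsiServerTarget -TargetName 'GuestCluster' -InitiatorIds 'IPAddress:10.0.1.1','IPAddress:10.0.2.1'

# Map all three disks to the target
Add-IscsiVirtualDiskTargetMapping -TargetName 'GuestCluster' -Path 'C:\iSCSI\Quorum.vhdx'
Add-IscsiVirtualDiskTargetMapping -TargetName 'GuestCluster' -Path 'D:\iSCSI1\DC.vhdx'
Add-IscsiVirtualDiskTargetMapping -TargetName 'GuestCluster' -Path 'E:\iSCSI2\Exchange.vhdx'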

2) On Host1: I start the Add Roles and Features Wizard and add MPIO to Host1.
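
The PowerShell equivalent is a one-liner:

# Add the Multipath I/O feature
Install-WindowsFeature -Name Multipath-IO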

Let’s run MPIO.

As you can see, by default MPIO does not have iSCSI support…

…so I will add it and restart the server.

Now iSCSI support is enabled.
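
This step can be scripted as well – roughly the PowerShell equivalent of ticking “Add support for iSCSI devices” (a reboot is still required afterwards):

# Let the Microsoft DSM automatically claim iSCSI devices
Enable-MSDSMAutomaticClaim -BusType iSCSI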

3) Connect to the target:

As Host1 is connected to the target (Host3) via two independent paths, I should connect to the target twice and configure two pairs of target/initiator addresses – for this I click the Advanced… button.

Now I press OK twice and return to the iSCSI Initiator’s Targets window.

Once again, I click Connect, enter the second pair of addresses, and press OK twice.
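
The same two-path connection can be scripted with the iSCSI initiator cmdlets – a sketch in which the two target/initiator address pairs are placeholders for the addresses configured above:

# Register the target portals, one per path
New-IscsiTargetPortal -TargetPortalAddress '10.0.1.3' -InitiatorPortalAddress '10.0.1.1'
New-IscsiTargetPortal -TargetPortalAddress '10.0.2.3' -InitiatorPortalAddress '10.0.2.1'

# Discover the target's IQN
$iqn = (Get-IscsiTarget).NodeAddress

# Connect once per path, enabling multipath and making the connections persistent
Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress '10.0.1.3' -InitiatorPortalAddress '10.0.1.1' -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress '10.0.2.3' -InitiatorPortalAddress '10.0.2.1' -IsMultipathEnabled $true -IsPersistent $true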

On the Volumes and Devices tab we can see the three iSCSI VHDs.
Please note that the third volume still doesn’t use MPIO (there’s no mpio prefix after the backslash), so I press Auto Configure and all three volumes become multipathed.

4) To make sure MPIO is working, I press Devices… and then MPIO…

As you can see, there are two active connections to the target, because by default MPIO uses the Round Robin load-balancing policy – it allows both paths to be used simultaneously for load distribution.
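
The global policy can be checked (and changed) from PowerShell:

# Show the MSDSM-wide default load-balancing policy
Get-MSDSMGlobalDefaultLoadBalancePolicy

# Set Round Robin (RR) explicitly, if desired
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR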

We can click on each path and see its properties by pressing Details.

Once the multipath connection to the target is established, I can proceed to formatting the newly created disks (in fact, iSCSI VHDs) in Disk Management.

In the same manner I bring online and format the other two iSCSI disks.
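
Bringing the disks online and formatting them can be scripted too – a minimal sketch that assumes the iSCSI disks are still raw (not yet initialized):

# Bring every iSCSI disk online, initialize it, create a partition and format it
Get-Disk | Where-Object BusType -eq 'iSCSI' | ForEach-Object {
    Set-Disk -Number $_.Number -IsOffline $false
    Set-Disk -Number $_.Number -IsReadOnly $false
    Initialize-Disk -Number $_.Number -PartitionStyle GPT
    New-Partition -DiskNumber $_.Number -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -Confirm:$false
}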

Now let’s see the default MPIO configuration (please run PowerShell as Administrator):

Get-Command -Module MPIO
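
The current values themselves are shown by Get-MPIOSetting:

# List the MPIO timer values, including DiskTimeoutValue
Get-MPIOSetting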

Please note that the DiskTimeoutValue is set to 60 seconds – it means that if one of the paths fails, MPIO will wait 60 seconds before failing over the I/O operations in flight on the failed path to the second path. I think this is too long, so I’ll change the value to 20 seconds:

Set-MPIOSetting -NewDiskTimeout 20

Once the server is restarted I can test the multipath connection by copying a file to an iSCSI volume and unplugging one of the patch cords from the server: after the disk timeout period (20 sec.) the copy operation should resume.

Note: the copy progress dialog is not very descriptive – this is due to the speed scale Windows Server 2016 chooses (for an unknown reason) for the graph: the actual copying speed is at most ~250 MBps (2 x 1 Gbps network adapters ≈ 2 Gbps ≈ 250 MB/s), which is why the green speed “bar” looks so “weak”.

Summary:

MPIO in Windows Server 2016 does provide both network speed aggregation and fault tolerance for network-attached (iSCSI) storage.
