SQL Server 2014: Deploying an AlwaysOn availability group in a guest cluster – Part 1

This page starts a new series of articles on building a highly available SQL Server 2014 solution using guest clustering and an AlwaysOn availability group. In this article I’ll focus on building a physical cluster using a PC-based iSCSI target and two host machines; in Part 2 I’ll create two virtual machines – SQL1 and SQL2 – install SQL Server Datacenter on both of them, create a guest cluster and make these virtual machines highly available. Finally, in Part 3 I will enable AlwaysOn on SQL1 and SQL2 and add one of my test databases to the AlwaysOn availability group.
The test database I’m going to add to the AlwaysOn group deserves special mention: before making a database highly available I prefer to convert it to a contained database – a database type available since SQL Server 2012.
I think contained databases are one of the most important features in the recent releases of SQL Server. Why? Because they greatly reduce the complexity of managing SQL Server databases: a contained database introduces a new security boundary by allowing users to log into the database directly, without the need for a corresponding server login.
Before SQL Server 2012, database administrators had to manage permissions at two separate levels: the server instance level (logins) and the database level (database users). Things get much more complicated when it comes to building highly available database solutions: the more SQL Servers that can potentially host a database after a failover or switchover, the more confusing the process of synchronizing server instance logins across all secondary servers becomes. Contained databases allow us to get rid of server instance dependencies such as server logins, settings and metadata.
After a database is converted to a contained database, it can be made active (i.e. primary) on any server in the cluster with no extra work to synchronize the database users with the new SQL Server instance’s logins.
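As a quick sketch of what that conversion looks like in T-SQL (the database name MyTestDB, the user name AppUser and the password are placeholders – substitute your own):

```sql
-- Enable contained database authentication at the instance level
-- (required before any database on the instance can be contained).
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
GO

-- Convert an existing database to partial containment.
ALTER DATABASE MyTestDB SET CONTAINMENT = PARTIAL;
GO

-- Create a contained user that authenticates at the database level,
-- with no corresponding server login.
USE MyTestDB;
CREATE USER AppUser WITH PASSWORD = 'S0me$trongPassw0rd!';
GO
```

Existing database users that are mapped to server logins can be converted in place with the system procedure sp_migrate_user_to_contained.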

Here’s a schematic of what I’m going to build:


I start creating the physical cluster with three computers named Host1, Host2 and Host3 (all running Windows Server 2012 R2): Host1 and Host2 will be the cluster nodes and Host3 will be the iSCSI target. Since I need to create two VMs for SQL1 and SQL2, I will create three iSCSI disks: one for each VM and a third for the cluster quorum resource. All these disks will reside on Host3\M$.

First of all I need to install the iSCSI Target Server role on Host3:
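The same step can be done from an elevated PowerShell prompt instead of Server Manager (a minimal sketch; FS-iSCSITarget-Server is the role service name under File and Storage Services):

```powershell
# On Host3: install the iSCSI Target Server role service.
Install-WindowsFeature -Name FS-iSCSITarget-Server -IncludeManagementTools
```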



Once the role has been installed I can proceed to creating the iSCSI disks:

This procedure should be repeated two more times to add the disks for SQL1 and SQL2.
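The whole disk-creation step can also be scripted on Host3. This is a sketch: the folder under M:\, the target name SQLCluster and the disk sizes are my assumptions – adjust them to your environment (the initiator IDs may also need to be fully qualified domain names):

```powershell
# On Host3: create the three iSCSI virtual disks on the M: drive.
New-IscsiVirtualDisk -Path M:\iSCSIVirtualDisks\SQL1.vhdx   -SizeBytes 100GB
New-IscsiVirtualDisk -Path M:\iSCSIVirtualDisks\SQL2.vhdx   -SizeBytes 100GB
New-IscsiVirtualDisk -Path M:\iSCSIVirtualDisks\Quorum.vhdx -SizeBytes 1GB

# Create a target that both cluster nodes are allowed to connect to...
New-IscsiServerTarget -TargetName SQLCluster `
    -InitiatorIds "DNSName:Host1","DNSName:Host2"

# ...and map all three disks to it.
Add-IscsiVirtualDiskTargetMapping -TargetName SQLCluster -Path M:\iSCSIVirtualDisks\SQL1.vhdx
Add-IscsiVirtualDiskTargetMapping -TargetName SQLCluster -Path M:\iSCSIVirtualDisks\SQL2.vhdx
Add-IscsiVirtualDiskTargetMapping -TargetName SQLCluster -Path M:\iSCSIVirtualDisks\Quorum.vhdx
```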




Now I can attach these iSCSI disks to Host1 and Host2 by configuring the iSCSI Initiator on both hosts:

On Host1:

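The initiator side can be scripted as well (run on Host1, then repeat on Host2; this assumes Host3 resolves by name – use its iSCSI-network IP address instead if it doesn’t):

```powershell
# Start the iSCSI Initiator service and make it start automatically.
Set-Service -Name msiscsi -StartupType Automatic
Start-Service -Name msiscsi

# Point the initiator at the target server and connect persistently.
New-IscsiTargetPortal -TargetPortalAddress Host3
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```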

Now, in Computer Management\Disk Management, I’ll bring all the newly added disks online


and then initialize them:


On Host2 I must bring the same disks online (but not initialize them – they were already initialized on Host1):
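For reference, the same online/initialize steps can be done from PowerShell (a sketch – review the output of Get-Disk first to be sure you are only touching the new iSCSI disks):

```powershell
# On Host1: bring the new disks online and initialize them as GPT.
Get-Disk | Where-Object { $_.IsOffline } | Set-Disk -IsOffline $false
Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' } |
    Initialize-Disk -PartitionStyle GPT

# On Host2: only bring the disks online; do NOT initialize them again.
Get-Disk | Where-Object { $_.IsOffline } | Set-Disk -IsOffline $false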


The last step in configuring the iSCSI disks is to create volumes:

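The volume creation can also be scripted on Host1 (the disk numbers 1–3 are illustrative – check Get-Disk to see which numbers the iSCSI disks actually received):

```powershell
# On Host1: create a single NTFS volume on each of the new disks.
1..3 | ForEach-Object {
    New-Partition -DiskNumber $_ -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -Confirm:$false
}
```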

Now it’s possible to start creating the cluster. First I need to install the Failover Clustering feature on Host1 and Host2 (both cluster nodes must be members of a domain) and then proceed to creating a new cluster, for example from Host1:

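The equivalent PowerShell (a sketch – add the -StaticAddress parameter with your own cluster IP if you are not using DHCP on the management network):

```powershell
# On both Host1 and Host2: install the Failover Clustering feature.
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# From Host1: validate the configuration, then create the cluster.
Test-Cluster -Node Host1, Host2
New-Cluster -Name CLUSTER -Node Host1, Host2
```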

All validation checks have completed successfully.

By now a cluster named CLUSTER is created. Let’s look at its properties:

Attention: only Cluster Network 1 will have Cluster Use set to “Cluster and Client”; Cluster Network 2 will be set to “None”.
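These roles can be verified or set from PowerShell on a cluster node (in the FailoverClusters module, the Role property is 3 for “Cluster and Client”, 1 for “Cluster Only” and 0 for “None”):

```powershell
# Management network: cluster and client traffic.
(Get-ClusterNetwork -Name "Cluster Network 1").Role = 3

# iSCSI network: excluded from cluster use.
(Get-ClusterNetwork -Name "Cluster Network 2").Role = 0
```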


Cluster networks: Cluster Network 1 is the management/cluster heartbeat network (of course, in a production environment these MUST be separate networks, but I just don’t have enough network cards on my test machines); Cluster Network 2 is the iSCSI network.

Here are my hosts’ network connection settings:



Note: the HV network adapters will be used by Hyper-V for the virtual machines SQL1 and SQL2.

And the last step in configuring the physical cluster: I add Cluster Disk 1 and Cluster Disk 2 (the disks to be used for hosting SQL1 and SQL2) to Cluster Shared Volumes:
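This step is a one-liner per disk in PowerShell (the names “Cluster Disk 1” and “Cluster Disk 2” must match the names shown in Failover Cluster Manager; the quorum disk stays as the witness disk and is not added to CSV):

```powershell
# On a cluster node: add the two data disks to Cluster Shared Volumes.
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Add-ClusterSharedVolume -Name "Cluster Disk 2"
```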




In Part 1 of this series on deploying AlwaysOn with SQL Server 2014 we created a physical cluster and got ready to create the virtual machines SQL1 and SQL2 – please see Part 2 for step-by-step instructions on deploying SQL Server 2014 inside these virtual machines.
