How to: Configure SQL Server 2012 AlwaysOn – Part 1 of 7


Scope and Architecture

This is the first post in a series providing step-by-step instructions on how to set up and configure SQL Server 2012 AlwaysOn in a dual data-center Windows clustered environment using shared storage in multiple subnets.

When I say step-by-step, I mean that I will cover the steps specific to using AlwaysOn in a dual data-center Windows clustered environment with shared storage in multiple subnets. This does not include generic setup procedures for Windows Failover Clusters or installation of SQL Server Failover Cluster Instances. The basics will be touched on with references, but only the steps specific to this architecture and AlwaysOn will be presented in detail.


There is a large amount of information already available on the internet about AlwaysOn, and I will link out to these sources as much as possible for background information and concepts. The scope of this blog series is to provide step-by-step instructions for a particular architecture (explained below). I have chosen an architecture which includes Windows clustering (required for AlwaysOn), two data centers, and a combination of AlwaysOn database-level failover and Windows clustering instance-level failover. The most desirable aspect of this architecture is the ability to use shared storage to minimize disk space usage while still providing multi-data-center capabilities. The steps provided here were tested only with the target architecture but, in most cases, apply to all OS and SQL Server versions for which AlwaysOn is available.

Terminology disclaimer: You will notice that I've used the word AlwaysOn a number of times already. AlwaysOn is an umbrella term covering both AlwaysOn Failover Cluster Instances (built on Windows Server Failover Clustering) and AlwaysOn Availability Groups. In this architecture we are using both, which makes AlwaysOn acceptable shorthand, but the features should be referenced individually whenever possible.


Target Architecture:


In this setup we will produce two SQL Server Failover Cluster Instances (FCIs), one in Data Center 1 and one in Data Center 2. Each data center has two physical nodes, but all four nodes are members of the same Windows Server Failover Cluster (WSFC).

Before we get into too much more detail, I want to explain why I'm doing this. AlwaysOn can build a similar architecture without using FCIs; instead, it uses normal standalone instances on each node, with a replica copy on each instance, for up to five total replicas (one primary plus four secondaries in SQL Server 2012). There are two situations I can think of where you would want to use FCIs as I'm targeting here. The first is if you have a large cluster, such as five nodes in Data Center 1 alone, and you still want to add more nodes to fail over to Data Center 2. With a limit on replica copies, you might constrain yourself to smaller clusters. The second scenario is if you have disk space limitations with very large databases (VLDBs). This second reason was why I set this up at my organization: I had a 7 TB database that I didn't want multiple copies of. I accepted the need for two copies (one in each data center), but I didn't want to go any further than I had to while still maintaining automatic failover locally and manual failover across data centers.
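That failover split (automatic at the instance level via each FCI, manual across data centers via the availability group) shows up directly in the availability group definition. Here is a minimal T-SQL sketch using hypothetical instance names, endpoint URLs, and AG/database names; note that AG replicas hosted on FCIs must use manual failover mode:

```sql
-- Hypothetical names throughout: FCI1\INST1 (Data Center 1), FCI2\INST2 (Data Center 2).
-- Each replica is an FCI, so local failover is automatic at the instance level,
-- while the cross-data-center hop is a manual availability group failover.
CREATE AVAILABILITY GROUP [AG_VLDB]
FOR DATABASE [BigDatabase]
REPLICA ON
    N'FCI1\INST1' WITH (
        ENDPOINT_URL      = N'TCP://fci1.corp.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = MANUAL   -- required for replicas hosted on FCIs
    ),
    N'FCI2\INST2' WITH (
        ENDPOINT_URL      = N'TCP://fci2.corp.local:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = MANUAL
    );
```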

In addition, this step-by-step guide will cover many of the options you could use to set up your own AlwaysOn architecture, which makes it a useful reference even if your target architecture differs.

Back to the details: below is a list of the hardware and software specs used during this demonstration.


  • Nodes A and B
    • 4 CPUs
    • 4 GBs of RAM
    • 60 GB local storage (C:)
    • 20 GB iSCSI SAN-attached storage (E:)
    • Located in Data Center 1
  • Nodes C and D
    • 4 CPUs
    • 4 GBs of RAM
    • 60 GB local storage (C:)
    • 20 GB iSCSI SAN-attached storage (also named E:)
    • Located in Data Center 2


  • Windows Server 2008 R2 Enterprise SP1 (for Windows Server 2012 R2 check out Perry Whittle's post here)
  • 64-bit
  • SQL Server 2012 Enterprise w/ CU2 (11.0.2325)


That should set the basis for what we will accomplish with this blog series. Follow the links below to move on through this tutorial.




16 responses to “How to: Configure SQL Server 2012 AlwaysOn – Part 1 of 7”

  1. […] you are using this blog post as a point of reference and not seeking our Target Architecture here is a brief description of the two […]

  2. […] level. This needs to be accomplished for each instance involved in the architecture. In our target architecture this will need to be accomplished once per node for the SQL Server Failover Cluster Instance (FCI) […]

  3. […] is also a 7-part explanation starting here from Derik Hammer at the SQL Hammer blog, which has a lot of explanation and […]

  4. […] Never fear! I am not abandoning my writing completely. I will, however, be spending much less time writing while my attention and time resources are spent in other areas of personal and professional development. I look forward to finding time to write again and hope to make time to post even if they aren’t as detailed as ones like How to: Configure SQL Server 2012 AlwaysOn. […]

  5. sanjoy

    Where did you configure your quorum?

    1. Derik Hammer

      Since I used Windows Server 2008 R2 for this architecture, I used node majority with a witness disk in Data Center 1 to make sure that a connection outage between data centers didn't take down the cluster. Now that Windows Server 2012 and 2012 R2 are available, there is less risk of losing quorum, because dynamic quorum re-samples the live voters and adjusts the quorum calculation as nodes go offline. Also read about the dynamic witness in Windows Server 2012 R2.

  6. anton

    How do you create a dual-subnet availability group listener?

    1. Derik Hammer

      Once you have your two dedicated IP addresses, one per subnet, you can add them to your listener. The wizard in SSMS can be used, or T-SQL. Check out this BOL page for the steps.
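      For example, with hypothetical AG and listener names and placeholder addresses, the second subnet's IP (with its subnet mask) can be added to an existing listener like this:

      ```sql
      -- AG_VLDB, AgListener, and the addresses below are placeholders.
      -- Adds a second static IP, for the second subnet, to an existing listener.
      ALTER AVAILABILITY GROUP [AG_VLDB]
      MODIFY LISTENER N'AgListener'
      (ADD IP ((N'10.2.0.50', N'255.255.255.0')));
      ```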

      1. anton

        Let's say I have three data centers, each with its own two-node cluster, its own domain name, and different IP addresses, and all three data centers can ping each other. Do you think I can create an availability group and add all six nodes to it? Can I create one listener for all six nodes? Or must all six nodes stay in one domain? I have a project, which is why I need your help.

        1. Derik Hammer

          Are you using SQL Server 2012 or 2014? 2014 would be required to have six replicas in your AG, because in 2012 the limit was four secondaries plus one primary. Also note that all replicas must be members of the same WSFC and, prior to Windows Server 2016, all cluster nodes must belong to the same Active Directory domain. Assuming that you are using 2014, you would only need one listener, which services all of your replicas. The WSFC will move the listener resource around as your AG fails over.
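          As a sketch, the single listener for an AG can be created with T-SQL like the following (the AG name, listener name, and addresses are hypothetical; one static IP per subnet):

          ```sql
          -- One listener per availability group; names and addresses are placeholders.
          ALTER AVAILABILITY GROUP [AG_Client1]
          ADD LISTENER N'Client1Listener' (
              WITH IP ((N'10.1.0.51', N'255.255.255.0'),
                       (N'10.2.0.51', N'255.255.255.0')),
              PORT = 1433
          );
          ```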

          1. anton

            I am using SQL Server 2014 and Windows Server 2012 R2. I need to create five availability groups to separate different client databases. Do I need to create one listener for each availability group?

  7. Deepak Rao

    Nice article. Is it possible to pen down the behavior of SSRS, SSIS, and MDS in the case of failover? There is no direct support from Microsoft; it seems like we need to do all this manually.

    1. Derik Hammer

      Microsoft supports failover of MDS. Regarding SSIS, the service itself is not cluster-aware but can be installed on each node of the cluster. Depending upon where you store your packages, you have different options. The best option is to use the project deployment model with the SSISDB. See here for information on how to use SSISDB with Availability Groups; there is nothing special to do with just FCIs. Regarding SSRS, you can add the report catalog into FCIs and/or AGs, but the web front ends cannot be clustered. Instead, you can achieve high availability by scaling out the front ends, which is supported in Enterprise edition only. Each front end would then reference the same report catalog, which you have in your FCI/AG cluster. Those front ends would be external to the cluster, and you would likely layer a load balancer over the top.
