Database Deployment Work Flow with SQL Server Data Tools (SSDT)

Happy April Fools’ Day, everyone! I hope that you do not spend too much time reading completely false blog posts today. I, for one, promise that this post is not an April Fools’ joke. Feel free to read on without fear of frustration.


Today I wanted to talk about SQL Server Data Tools, specifically related to database, or dacpac, deployments. For those of you who do not follow me on Twitter, I am transitioning to a new company in two weeks. Before I get to my point, jump on Twitter and follow me, @SQLHammer ;-).

Anyways, I have been doing a lot of knowledge transfer and documenting lessons learned for my current company, and SSDT project configuration is one of the hot topics that I’ve been asked to cover. As I formed my thoughts and prepared for various meetings and documentation sessions, I realized that SSDT has two major areas where improvement is needed. The first is handling security and server-level objects across multiple environments such as development, test, and production. The second is how migration between versions of the database schema can occur. We are going to focus on that second area for the rest of this post.

SSDT does a pretty good job of making database changes declarative. You declare the desired state in the project, build a dacpac, and SSDT will compare your dacpac to your database and generate the scripts necessary to move the database from its current version to the version of the dacpac. While SSDT does a pretty good job, as I said, you are still required to build a plan of action for the migrations where a compare tool falls short.

Let us say, for example, that you want to rename a column. Simply renaming the column in the database project will work, and you will be able to deploy. Next thing you know, SSDT has dropped your column and created a new one instead of renaming it. You didn’t want to keep all the data in that column, did you? Another example is when you want to add a NOT NULL column to a table with existing data. Maybe you want to pre-populate the column, but you don’t want to use a default constraint because the data is driven from other tables and requires a custom query. You cannot accomplish this goal with a single deployment of SSDT by itself. You would have to choose between running a custom script in the post-deployment and making the project change later, or making the table change in the project and executing a script before, and external to, the dacpac to handle the column creation. To be fair, this type of problem seems inevitable with a comparison tool, because it would be a challenge for the tool to know how you need these scenarios handled.
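
To illustrate the first of those options, here is a minimal sketch of a safely re-runnable script that could live in the post-deployment step (or run externally before the dacpac). The names dbo.Customer, dbo.Account, RegionId, and AccountId are hypothetical; the dynamic SQL puts each statement in its own batch so the new column resolves at execution time rather than failing batch compilation:

    -- Safely re-runnable: skips all work if the column already exists.
    IF COL_LENGTH('dbo.Customer', 'RegionId') IS NULL
    BEGIN
        -- Add the column as NULL first so existing rows remain valid.
        EXEC sp_executesql
            N'ALTER TABLE dbo.Customer ADD RegionId INT NULL;';

        -- Populate from related data; a default constraint could not do this.
        EXEC sp_executesql
            N'UPDATE c
                 SET c.RegionId = a.RegionId
                FROM dbo.Customer AS c
                JOIN dbo.Account AS a
                  ON a.AccountId = c.AccountId;';

        -- Tighten the column to NOT NULL once every row has a value.
        EXEC sp_executesql
            N'ALTER TABLE dbo.Customer ALTER COLUMN RegionId INT NOT NULL;';
    END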

This brings us back to our problem: migrations between versions of the database schema which cannot be handled by the compare. Microsoft has provided us the ability to execute a single pre-deploy and a single post-deploy script during the dacpac deployment. Technically, you can execute as many scripts as you wish by using the SQLCMD :r syntax, but that is irrelevant for now. With these steps on either end of the work flow, we can easily add custom scripts to handle migration between versions, and writing them in a safely re-runnable way is also fairly easy.
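
For completeness, a sketch of that :r syntax: the single post-deployment script can fan out to any number of migration files at build time. The file names below are placeholders:

    -- Script.PostDeployment.sql (the project's single post-deploy script)
    -- :r pulls each referenced file into this script when the dacpac builds;
    -- the migration file names are hypothetical.
    :r .\Migrations\001-PopulateRegionId.sql
    :r .\Migrations\002-BackfillAuditData.sql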

So what is the problem? The problem is that the pre-deploy script is not truly a “pre”-deploy. Let’s take a look at the work flow.

[Figure: SSDT Work Flow]

As can be seen above, the compare and the generation of the deployment script actually occur before the pre-deploy script is executed. In our scenario, where we wanted to add a column and pre-populate it, we could not put the script into the pre-deploy: first, a script with the add-column would be generated; then the pre-deploy would create the column and populate it; then the generated script would execute and throw an exception because the column already exists. It seems to me that the pre-deploy should truly execute before the deployment, which includes the generation of the deployment script. That way, the generated script would be derived from the state of the database after the pre-deploy is complete. I do not know this for sure, but I believe that the purpose behind this work flow is to validate that the compare and generation can succeed before any code is executed, and to support the SSDT features which only report what would be done.

For this reason, we run a custom deployment step which calls any number of T-SQL scripts to handle our pre-deploy migration needs.
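
As a minimal sketch of one such pre-compare migration script, here is a safely re-runnable column rename (the names dbo.Customer, Surname, and LastName are hypothetical). Because it runs before the compare, SSDT sees the renamed column as already matching the project and generates no drop-and-add:

    -- Safely re-runnable: only rename if the old column still exists
    -- and the new one does not.
    IF COL_LENGTH('dbo.Customer', 'Surname') IS NOT NULL
       AND COL_LENGTH('dbo.Customer', 'LastName') IS NULL
    BEGIN
        EXEC sp_rename 'dbo.Customer.Surname', 'LastName', 'COLUMN';
    END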

[Figure: SSDT Work Flow 2]

Comments

  1. Hey Derik, to rename a column, use refactoring (click on the column, then Ctrl+R, R); this will generate an sp_rename on the column and will not drop it.

    This is one of the real differentiators between SSDT and standard compare tools like the Redgate or Apex ones.

    The second point is right: it is difficult for a tool to generate a single output script that also allows for changes that have not happened yet (think about generating a script for later deployment; if the pre-deploy script will not run until then, how do you compare against what the database would look like?). I think your approach of pre-running scripts would work well.

    SSDT is a great tool and there are plenty of options. Using deployment contributors and such gives you the ability to modify the generated script, so if you want to add or remove some steps, you can do so easily.

    ed

  2. When renaming a column in SSDT, right-click the column and choose Refactor -> Rename. That way SSDT updates its internal refactorlog, which will actually distinguish between a rename and a drop-and-re-create.
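
    For reference, the refactor is recorded in the project’s .refactorlog file, and the publish script then issues an sp_rename instead of a drop-and-add. A sketch of the kind of statement it generates, with hypothetical table and column names:

        EXECUTE sp_rename @objname = N'[dbo].[Customer].[Surname]',
                          @newname = N'LastName',
                          @objtype = N'COLUMN';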

    Personally, since table re-creation may occur in different scenarios, I have created a tool that analyzes the publish script and detects the actual size of any table re-creation. Should that exceed a configured limit, the deploy stops.

  3. Thank you Ed and Adi. I was unaware of the ability to tell SSDT to do a rename. Just goes to show that you can learn something new every day.

    As you both seem to have agreed, I find the pre-step very useful and definitely a requirement for the deployment process that my current organization uses.

  4. Something that might be useful here: we developed a way to script SQL Server databases such that the script can safely run on any existing database and modify your schema/data to match the database you’ve scripted (or just tell you the differences between your current database and what you’ve scripted). It’s pretty useful, and our tool is free:

    http://nobhillsoft.com/diana.aspx

    (search in help under ‘context free scripting’)

  5. I was able to use this work-around for the fact that the deployment script is generated before the pre-deployment script is executed (a sketch in T-SQL follows the steps below):

    Make the schema changes for the affected table in the project.

    In the pre-deployment script, rename the table that has changes. Also, include statements to re-generate the original table schema.

    In the post-deployment script, include insert statements to add data from the original, renamed table (including new data) into the new table according to the new schema.

    During deployment, the empty table with the original schema is updated to the new schema.
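
    As a sketch of these steps with hypothetical names (dbo.Orders gaining a NOT NULL RegionId column sourced from dbo.Customer):

        -- Pre-deployment: park the real table, then recreate an empty
        -- shell with the ORIGINAL schema for SSDT to alter.
        IF OBJECT_ID('dbo.Orders_Old') IS NULL
        BEGIN
            EXEC sp_rename 'dbo.Orders', 'Orders_Old';
            CREATE TABLE dbo.Orders
            (
                OrderId    INT NOT NULL PRIMARY KEY,
                CustomerId INT NOT NULL
            );
        END

        -- Post-deployment: copy the data forward per the NEW schema,
        -- then drop the parked table.
        IF OBJECT_ID('dbo.Orders_Old') IS NOT NULL
        BEGIN
            INSERT INTO dbo.Orders (OrderId, CustomerId, RegionId)
            SELECT o.OrderId, o.CustomerId, c.RegionId
              FROM dbo.Orders_Old AS o
              JOIN dbo.Customer   AS c ON c.CustomerId = o.CustomerId;

            DROP TABLE dbo.Orders_Old;
        END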

    • Wow, that sounds like a lot of work for the server to do. It also sounds like it would only work when you have an outage window to deploy in; the table(s) would be unusable during that process. Additionally, you would normally want all of that in an explicit transaction to protect yourself, and you won’t be able to span a transaction over the pre-deploy, deploy, and post-deploy steps.

      • Points taken, but you can’t wrap a transaction around your custom script AND the SSDT deployment.

        Also, my example is the reverse of what the generated SSDT change script does*, which is to create a new temporary table with the new schema, insert records from the existing table, drop the existing table, and rename the temporary table to the original name.
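
        A simplified sketch of that generated pattern (names are illustrative; the real script uses a tmp_ms_xx_* working table plus SET option and identity handling):

            BEGIN TRANSACTION;

            CREATE TABLE dbo.tmp_ms_xx_Orders
            (
                -- the NEW schema goes here
                OrderId    INT NOT NULL PRIMARY KEY,
                CustomerId INT NOT NULL
            );

            IF EXISTS (SELECT TOP 1 1 FROM dbo.Orders)
                INSERT INTO dbo.tmp_ms_xx_Orders (OrderId, CustomerId)
                SELECT OrderId, CustomerId
                  FROM dbo.Orders;

            DROP TABLE dbo.Orders;
            EXECUTE sp_rename N'dbo.tmp_ms_xx_Orders', N'Orders';

            COMMIT TRANSACTION;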

        One advantage here is that we can right-click and publish changes directly from Visual Studio during development, as well as during a CI deployment without modifications.

        Does your custom migration script manage your table schema changes, moving data, and include changes to all dependent objects (views, stored procedures)?

        In any event there is unwanted duplication in the script and the project files – ideally SSDT would include a pre-compare script.

        * I am changing a FK column to point to a different PK in a different table, and also moving columns from one table to another

  6. Like your idea @Derik.
    Can you provide some sample scripts which you put in those 2 POSH blocks?
    Thanks!
