Each month, on the first Tuesday, the announcement for the blog party T-SQL Tuesday comes out. Those who are interested then post their blogs on the selected subject on the second Tuesday of the month. If you’ve never heard of T-SQL Tuesday, it’s a blog party started by Adam Machanic (b/t) over 6 years ago. Robert Davis (b/t), our host this month, has chosen the subject Be The (data) Change. Specifically, data changes.
As my contribution to #tsql2sday 74, I will be discussing continuous integration and when static data is not truly static.
Applications change over time. That is the nature of an application: its purpose is to serve a need, usually a business need, and people and their business needs change, which causes the application to drift. That is why there is never truly static data. Hard-coded values are frowned upon, and lookup (or dictionary) tables are common in databases.
The traditional way of handling lookup value changes was to change the values directly. This could be done with an ad hoc script or even a planned release, but it was typically a meat bag pressing F5 that kicked everything off. Over the years, tooling for database development has taken major steps forward, and now we are encouraged to put our databases in source control. Sadly, our tools are targeted much more at schema changes than at data. A SQL Server Data Tools (SSDT) database project, for example, does not go very far toward helping us push data changes into our environments.
Here is what I like to do
One method that I find works well for keeping your lookup tables up to date is to create MERGE scripts that your developers change in source control. You then create a deployment script that calls the various MERGE statement files immediately before or after the schema deployment of your database.
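The deployment script can be as simple as a SQLCMD file that includes each MERGE file with the :r command; a minimal sketch, assuming the MERGE files live in a ReferenceData folder next to the script (the folder and file names here are hypothetical):

```sql
-- Deploy.sql: run in SQLCMD mode (e.g. sqlcmd -i Deploy.sql),
-- immediately before or after the schema deployment.
PRINT 'Deploying reference data...';

-- One :r include per lookup-table MERGE script.
:r .\ReferenceData\Common.DisbursementType.sql

PRINT 'Reference data deployment complete.';
```

Adding a new lookup table to the deployment then becomes a one-line change: another :r include.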
My lookup table
CREATE TABLE [Common].[DisbursementType] (
    [disbursementTypeId]   SMALLINT         IDENTITY (1, 1) NOT NULL,
    [name]                 VARCHAR (50)     NOT NULL,
    [description]          VARCHAR (100)    NULL,
    [isEnabled]            BIT              CONSTRAINT [DF_DisbursementType_isEnabled] DEFAULT ((1)) NOT NULL,
    [taxModelValue]        VARCHAR (50)     CONSTRAINT [DF_DisbursementType_taxModelValue] DEFAULT ('') NOT NULL,
    [rowCreated]           DATETIME2 (7)    CONSTRAINT [DF_DisbursementType_rowCreated] DEFAULT (sysutcdatetime()) NOT NULL,
    [rowCreatedUserId]     UNIQUEIDENTIFIER NOT NULL,
    [rowLastUpdated]       DATETIME2 (7)    CONSTRAINT [DF_DisbursementType_rowLastUpdated] DEFAULT (sysutcdatetime()) NOT NULL,
    [rowLastUpdatedUserId] UNIQUEIDENTIFIER NOT NULL,
    CONSTRAINT [PK_DisbursementType] PRIMARY KEY CLUSTERED ([disbursementTypeId] ASC) ON [Common]
);
My MERGE command
--:SETVAR userAdmin '00000000-0000-0000-0000-000000000000'
--:SETVAR dtm SYSUTCDATETIME()

SET IDENTITY_INSERT Common.DisbursementType ON;

MERGE INTO Common.DisbursementType AS [target]
USING (VALUES
    (1, 'Direct Deposit', 'Direct Deposit Refund', 1, 'DirectDeposit'),
    (2, 'Mail Me A Check', 'Mail Me A Refund Check', 1, 'RefundCheck'),
    (3, 'Mail Me A Debit Card', 'Mail Me A Debit For Refund', 1, 'RefundDebitCard'),
    (4, 'Direct Withdrawal', 'Direct Withdrawal For Payment', 1, 'DirectWithdrawal'),
    (5, 'Mail Check', 'Mail Check For Payment', 1, 'SendCheck'),
    (6, 'Credit Card', 'Credit Card For Payment', 1, 'CreditCard')
) AS [source] (disbursementTypeId, name, [description], isEnabled, taxModelValue)
ON [target].disbursementTypeId = [source].disbursementTypeId
WHEN MATCHED THEN
    UPDATE SET name = [source].name,
               [description] = [source].[description],
               isEnabled = [source].isEnabled,
               taxModelValue = [source].taxModelValue,
               rowLastUpdatedUserId = $(userAdmin),
               rowLastUpdated = $(dtm)
WHEN NOT MATCHED BY TARGET THEN
    INSERT (disbursementTypeId, name, [description], isEnabled, taxModelValue,
            rowCreated, rowCreatedUserId, rowLastUpdated, rowLastUpdatedUserId)
    VALUES (disbursementTypeId, name, [description], isEnabled, taxModelValue,
            $(dtm), $(userAdmin), $(dtm), $(userAdmin))
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;

SET IDENTITY_INSERT Common.DisbursementType OFF;
NOTE: You will see some SQLCMD variables in this script. That is because I adapted it from a script I was using in an SSDT database project whose deployment process used SQLCMD mode. Those variables can be removed, if desired.
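If you are not deploying in SQLCMD mode, one way to remove the variables is to replace $(userAdmin) and $(dtm) with local variables; a sketch, using the same placeholder admin GUID from the commented :SETVAR lines:

```sql
-- Declared once at the top of the script in place of the SQLCMD variables.
DECLARE @userAdmin UNIQUEIDENTIFIER = '00000000-0000-0000-0000-000000000000';
DECLARE @dtm       DATETIME2 (7)    = SYSUTCDATETIME();

-- Then reference them wherever the MERGE used $(userAdmin) and $(dtm), e.g.:
--     rowLastUpdatedUserId = @userAdmin,
--     rowLastUpdated = @dtm
```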
This method is very simple to use because you only ever have to edit the VALUES section of the script when your lookup values change. You control the exact identity values so that they match in all environments, and you deploy with a desired-state mentality.
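For example, adding a new disbursement type is a one-line change to the tail of the VALUES list (the PayPal row here is purely hypothetical); the next deployment inserts it in every environment, with the identity value you chose:

```sql
    (6, 'Credit Card', 'Credit Card For Payment', 1, 'CreditCard'),
    (7, 'PayPal', 'PayPal For Payment', 1, 'PayPal')  -- hypothetical new row
```

Deleting a row from the list works the same way: the WHEN NOT MATCHED BY SOURCE clause removes it from the table on the next deployment.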