Do not use percentage-based auto-growth

Having autogrowth enabled on your SQL Server databases is great! Even in IT shops where you diligently right-size your database files (which is a good idea), autogrowth is the perfect emergency feature: it will increase the size of your files as needed, until you run into an external blocker such as limited disk space. What is a bad idea, however, is leaving SQL Server's default settings alone. When a database is created, it begins with settings derived from the model database unless you pass the appropriate non-default options in your CREATE DATABASE command.
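For example, here is a minimal sketch of spelling those options out instead of inheriting model's defaults. The database name, file paths, and every size below are illustrative placeholders, not recommendations:

```sql
-- A sketch only: explicit sizes and fixed growth increments instead of
-- the model defaults. SalesDb, the paths, and all sizes are placeholders.
CREATE DATABASE SalesDb
ON PRIMARY
(
    NAME = SalesDb_data,
    FILENAME = 'D:\Data\SalesDb.mdf',
    SIZE = 10GB,           -- right-sized up front
    FILEGROWTH = 1GB       -- fixed increment, not a percentage
)
LOG ON
(
    NAME = SalesDb_log,
    FILENAME = 'L:\Logs\SalesDb.ldf',
    SIZE = 4GB,
    FILEGROWTH = 512MB     -- fixed increment, not the 10% default
);
```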

[Image: the model database's default file size and autogrowth settings]

The model database is very small, with an initial size of 4 MB total. It also has a very small data file autogrowth setting: 1 MB. For a database that is 3 MB, a 1 MB growth might be great (a 33% increase), but a database that size is far from typical. Finally, we get to the log file autogrowth setting, which is set to 10%. For all SQL Servers everywhere, I recommend changing this setting. Not only should you change it for existing databases, but you should change it in the model database as well, to prevent future databases from being created like this.
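A minimal sketch of both changes, assuming a fixed 512 MB increment (a placeholder; pick what fits your log throughput). modellog is the actual logical name of model's log file; for your own databases, look up the logical names in sys.master_files:

```sql
-- Fix the default at the source (model) and on an existing database.
-- 512MB is a placeholder increment; SalesDb/SalesDb_log are hypothetical.
ALTER DATABASE model
    MODIFY FILE (NAME = modellog, FILEGROWTH = 512MB);

ALTER DATABASE SalesDb
    MODIFY FILE (NAME = SalesDb_log, FILEGROWTH = 512MB);
```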

But what is so wrong with a percentage? Doesn’t that mean it will scale with my database?

File initialization

The easiest problem to mitigate for data files, and an impossible one to avoid for log files, is file initialization. Unless you grant SQL Server the proper permissions, when you create or extend a data file, the file must be zero-initialized. This means that, for a 2 GB extension, SQL Server must write 2 GB worth of zeros across your disk. If waiting for 2 GB of zeros to be written sounds slow to you, you are right; SQL Server halts the user's write operations while the growth completes, which is a significant performance issue. And a 10% autogrowth stops scaling quickly: once your database is over 20 GB or so, every growth event is a 2 GB (and growing) zero-initialized write.

For data files, you can grant the SQL Server service account the necessary Windows permission (Perform volume maintenance tasks) to enable instant file initialization. Instant file initialization lets SQL Server skip the zero-initialization step; it simply allocates the new space and moves on. This is very fast. Transaction log files, however, cannot utilize this feature; your logs will always be zero-initialized. This makes the model database's default settings even worse, because the growth setting that scales worst is applied to the one file type that cannot benefit from instant file initialization.
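If you are unsure whether instant file initialization is enabled, SQL Server 2016 SP1 and later report it directly:

```sql
-- 'Y' means the service account holds "Perform volume maintenance tasks"
-- and data file operations can skip zero-initialization.
SELECT servicename,
       instant_file_initialization_enabled
FROM sys.dm_server_services;
```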

VLF counts

VLFs, or virtual log files, are the units that make up your transaction log file. When you create or extend a log file, SQL Server divides the new space into VLFs based on the size of that growth:

  1. Growth of 64 MB or less: 4 VLFs
  2. Growth of more than 64 MB, up to 1 GB: 8 VLFs
  3. Growth of more than 1 GB: 16 VLFs

Algorithm credit: sqlperformance.com

As an example, if you create a database with a 1 GB log file, you will have 8 VLFs, each 128 MB in size. If you then autogrow in 10% increments, each growth adds 8 more VLFs (the first 10% growth is roughly 102 MB, yielding eight VLFs of about 12.8 MB each) until a single increment exceeds 1 GB, at which point each growth produces 16 VLFs. Every one of those growth operations produces VLFs of a different size.
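To see how many VLFs a log has accumulated, you can count them yourself. DBCC LOGINFO returns one row per VLF, and on SQL Server 2016 SP2 and later the sys.dm_db_log_info function exposes the same information:

```sql
-- One row per VLF in the current database's log; the row count is the
-- VLF count.
DBCC LOGINFO;

-- SQL Server 2016 SP2 and later: the same information without DBCC.
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());
```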

Percentage-based autogrowth settings can easily translate into performance problems. A large number of small autogrowth events can directly impact the performance of your DML statements, because all log activity must halt for each growth (one more reason to right-size the file during a maintenance window). Check out this article for DML performance graphs. According to Microsoft, with high VLF counts you might encounter the following problems:

  1. One or more databases take a very long time to finish recovery during SQL Server startup
  2. Restoring a database takes a very long time to complete
  3. Attempts to attach a database take a very long time to finish
  4. When you attempt to set up database mirroring, you encounter error messages 1413, 1443, and 1479, indicating a timeout
  5. You encounter memory-related errors, such as 701, when you attempt to restore a database

Reference: support.microsoft.com

General guidance is to keep a log under 1,000 VLFs and to grow it in increments no larger than 8 GB (and never exactly 4 GB, due to a known bug; see here), which keeps each VLF at or below 512 MB: an 8 GB growth creates 16 VLFs of 512 MB each.

The key to hitting an appropriate VLF count is to grow the file in static increments that make sense for your workload and to right-size your files whenever possible. During right-sizing maintenance windows, make sure to loop through and grow in those same increments; don't just set the final size and execute, or you might end up with too few VLFs, each of them extremely large. A sketch of that loop follows.
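Here is a minimal sketch, assuming a hypothetical SalesDb database whose SalesDb_log file is grown in 8,000 MB steps to a 64,000 MB target (all names and sizes are placeholders). Dynamic SQL is used because ALTER DATABASE requires a literal size value:

```sql
-- A sketch: grow a hypothetical SalesDb_log in 8,000 MB steps to a
-- 64,000 MB target instead of one giant growth. Dynamic SQL is needed
-- because ALTER DATABASE requires a literal SIZE value.
DECLARE @db         sysname = N'SalesDb',
        @file       sysname = N'SalesDb_log',
        @target_mb  int     = 64000,
        @step_mb    int     = 8000,   -- deliberately not a multiple of 4 GB
        @current_mb int,
        @sql        nvarchar(500);

SELECT @current_mb = size / 128       -- size is stored in 8 KB pages
FROM sys.master_files
WHERE database_id = DB_ID(@db)
  AND name = @file;

WHILE @current_mb < @target_mb
BEGIN
    SET @current_mb = CASE WHEN @current_mb + @step_mb > @target_mb
                           THEN @target_mb
                           ELSE @current_mb + @step_mb END;

    SET @sql = N'ALTER DATABASE ' + QUOTENAME(@db) +
               N' MODIFY FILE (NAME = ' + QUOTENAME(@file) +
               N', SIZE = ' + CAST(@current_mb AS nvarchar(10)) + N'MB);';
    EXEC (@sql);
END;
```

Each pass through the loop is one growth event, so each 8,000 MB step produces 16 VLFs of 500 MB, keeping the total count and the individual VLF sizes in the recommended range.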
