I’m a huge fan of the Ola Hallengren maintenance scripts. They are incredibly powerful and, once you’ve read the documentation, easy to use. But one thing the scripts can’t anticipate is different settings for each database on the instance. Allow me to show you how I’ve fixed that.
Suppose I’ve got an instance that hosts a number of databases with different uses. Some are more or less business critical, others are more like repository or metadata databases. This means the maintenance will differ between these databases. A metadata database doesn’t need a log backup every 5 minutes, maybe not even a daily full backup.
Let’s assume I’ve got ten databases on the same instance, named DB1 to DB10. Databases 1 to 4 are business critical, high maintenance and very important. Databases 5 and 6 are repository databases that contain the data some application needs to run, databases 7 to 9 are metadata databases, and database 10 is a huge data warehouse.
Now, the Ola script set can work with a list of databases, so you can create jobs for each combination. But that’s a static way of working and I like things to be dynamic, because I don’t want to change all the jobs manually when database DB11 comes in.
Now, show me that solution.
So I started out with creating a configuration table where I register the database name and a set of different options:
create table OHManagement
(
    ID int identity(1,1) primary key,
    DATABASENAME varchar(100) not null,
    DAILYINDEX bit default 0,
    WEEKINDEX bit default 0,
    DAILYFULLBACKUP bit default 0,
    WEEKFULLBACKUP bit default 0,
    DAILYINCREMENTAL bit default 0,
    LOGBACKUP bit default 0,
    STATISTICSDAILY bit default 0,
    STATISTICSWEEKLY bit default 0,
    INTEGRITYDAILY bit default 0,
    INTEGRITYWEEKLY bit default 0
);
This table starts out empty, obviously. So my next step is a procedure that merges data into this table. The procedure should be scheduled on a daily basis to add new databases and delete dropped or detached databases.
MERGE dbo.OHManagement T
USING (SELECT name FROM sys.databases) S
   ON T.DATABASENAME = S.name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (DATABASENAME, DAILYINDEX, WEEKINDEX, DAILYFULLBACKUP, WEEKFULLBACKUP,
            DAILYINCREMENTAL, LOGBACKUP, STATISTICSDAILY, STATISTICSWEEKLY,
            INTEGRITYWEEKLY, INTEGRITYDAILY)
    VALUES (S.name, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0);
This is of course just a starting point. You can change the default values to fit your needs, and add code that removes databases that are no longer there. Something else you can do is add a condition on the recovery model (full, bulk logged or simple) and set the backup flags accordingly, or change them manually in the table.
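One way to do both in a single statement is to extend the MERGE with a delete clause and a CASE on the recovery model. This is a sketch; the recovery-model-based defaults here are my assumption, adjust them to your own policy:

```sql
-- Sketch: extend the MERGE to remove dropped databases and to pick
-- defaults based on the recovery model (the defaults are assumptions).
MERGE dbo.OHManagement T
USING (SELECT name, recovery_model_desc
       FROM sys.databases
       WHERE database_id > 4) S            -- skip the system databases
   ON T.DATABASENAME = S.name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (DATABASENAME, DAILYINDEX, WEEKINDEX, DAILYFULLBACKUP, WEEKFULLBACKUP,
            DAILYINCREMENTAL, LOGBACKUP, STATISTICSDAILY, STATISTICSWEEKLY,
            INTEGRITYWEEKLY, INTEGRITYDAILY)
    VALUES (S.name, 0, 1, 0, 1, 1,
            -- only plan log backups for databases in full recovery
            CASE WHEN S.recovery_model_desc = 'FULL' THEN 1 ELSE 0 END,
            1, 0, 1, 0)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;                                 -- database was dropped or detached
```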
Yes, I’m using the MERGE statement and there’s no harm in using it here. Run this script once to populate the table. Then paste the script into a job and schedule it to run at least once a day, some time before your maintenance window starts. If you add a database in full recovery, run it right away to prevent unhappy moments with your log file.
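Creating that job can itself be scripted with the Agent procedures in msdb. A minimal sketch; the job name, schedule and the wrapper procedure name are placeholders of my own:

```sql
-- Sketch: create the daily sync job with SQL Agent's msdb procedures.
-- Job name, schedule and start time are placeholders; fit them to your window.
USE msdb;
EXEC dbo.sp_add_job @job_name = N'OHManagement - sync database list';
EXEC dbo.sp_add_jobstep
     @job_name  = N'OHManagement - sync database list',
     @step_name = N'Merge sys.databases into OHManagement',
     @subsystem = N'TSQL',
     @command   = N'EXEC dbo.usp_SyncOHManagement;';  -- hypothetical proc wrapping the MERGE
EXEC dbo.sp_add_jobschedule
     @job_name = N'OHManagement - sync database list',
     @name = N'Daily before maintenance window',
     @freq_type = 4,                 -- daily
     @freq_interval = 1,
     @active_start_time = 190000;    -- 19:00:00
EXEC dbo.sp_add_jobserver @job_name = N'OHManagement - sync database list';
```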
Let’s say I want to update my statistics daily. The query to do this looks like this:
DECLARE @DatabaseList varchar(max);

SET @DatabaseList = STUFF(
    (SELECT DISTINCT ',' + DATABASENAME
     FROM dbo.OHManagement
     WHERE STATISTICSDAILY = 1
     FOR XML PATH('')), 1, 1, '');

IF @DatabaseList IS NULL
    PRINT 'No database selected.';
ELSE
    EXECUTE master.dbo.IndexOptimize
        @Databases = @DatabaseList,
        @FragmentationLow = NULL,
        @FragmentationMedium = NULL,
        @FragmentationHigh = NULL,
        @UpdateStatistics = 'ALL',
        --@OnlyModifiedStatistics = 'Y',
        @StatisticsModificationLevel = 1;
Let’s walk through this step by step.
First I’m declaring a local variable to store the database names.
Next I’m filling this variable with the names of the databases that are eligible for the statistics update. If no database qualifies, the script just prints that no database was selected. You should see that in the log and take action from there.
Because the procedures from Ola can work with lists, I can offer one a list in the shape of a variable. It will pick it up and do its business.
For each type of maintenance (backup, indexes, statistics and integrity check) you can create a job, let it read from the controlling table and do its thing. Each maintenance job is scheduled to run at the time you need to perform that sort of maintenance.
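As an example, the daily full backup job follows the same list-building pattern, feeding Ola’s DatabaseBackup procedure instead. A sketch; the backup directory is a placeholder:

```sql
-- Sketch: daily full backup driven by the DAILYFULLBACKUP flag.
DECLARE @DatabaseList varchar(max);

SET @DatabaseList = STUFF(
    (SELECT DISTINCT ',' + DATABASENAME
     FROM dbo.OHManagement
     WHERE DAILYFULLBACKUP = 1
     FOR XML PATH('')), 1, 1, '');

IF @DatabaseList IS NULL
    PRINT 'No database selected.';
ELSE
    EXECUTE master.dbo.DatabaseBackup
        @Databases = @DatabaseList,
        @Directory = N'X:\Backup',    -- placeholder path
        @BackupType = 'FULL',
        @CheckSum = 'Y';
```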
No matter how wonderful Azure SQL databases are, they won’t rebuild their indexes or refresh their statistics automagically. You can’t do backups or integrity checks there; those will throw nasty errors your way. But the index and statistics scripts work quite nicely. Remember that you need to place the procedure in each individual database, because cross-database queries don’t work. And then enter the struggle of triggering the jobs. Some kind of automation might work, or you could look at Elastic Query, a way of doing some sort of cross-database querying. Then again, if you deny access from Azure resources to your databases, these will fail. If you have a brilliant solution, post it in the comments :).
Do I trust that SQL Agent will play along nicely and do the work? No, because I don’t trust any tooling. This has nothing to do with quality, but everything with external influences I have no control over. Think of Windows updates: all of a sudden the updates kick in and reboot the server. If this interferes with the job trigger, you’ll miss a run. So I need to know whether the jobs play along nicely. I still like the solution by Rob Sewell and friends called dbachecks (built on dbatools): a PowerShell module with all kinds of checks built in, configurable and even with a dashboard. You can also write your own T-SQL code and log the results in a tool like Zabbix, or buy monitoring software from Redgate, for instance. There are enough options, but make sure you monitor in some way.
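A home-grown check can be as simple as querying the Agent job history in msdb for recent failures. A sketch; what you do with the result (log it, alert on it) is up to you:

```sql
-- Sketch: list Agent jobs that did not succeed in the last 24 hours.
SELECT j.name,
       h.run_date,
       h.run_time,
       h.message
FROM msdb.dbo.sysjobhistory h
JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
WHERE h.step_id = 0                  -- job outcome row, not individual steps
  AND h.run_status <> 1              -- anything other than 'succeeded'
  AND msdb.dbo.agent_datetime(h.run_date, h.run_time) > DATEADD(DAY, -1, GETDATE());
```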
Now, it’s up to you 🙂
Thanks for reading!
5 thoughts on “Dynamic SQL Maintenance Scripts”
I’m slowly working through some code for SQL Elastic Jobs for Azure. Of course, to run Ola’s stuff, that requires putting that code into each database. Along the way, I also found out that the CommandExecute and CommandLog objects _really_ need to be in the “dbo” schema, though you can adjust IndexOptimize to be in a different schema by tweaking that script slightly. Hardest part there is making sure you can get the correct “master” and “job” users into each database – and adjusting their permissions so they can create/run the Ola scripts. Db_Owner works, but trying to scale that back a bit.
Slowly going to work up a post or two about that – just not quite ready to publish.
Using a Runbook, a PowerShell script executes the stored proc on the DB. The problem is that after 3 hours it times out. I need a workaround for this.
Hi Deo. Is the stored proc running for three hours?
It looks like there is a 3-hour timeout limit; I also read this online. Yes, depending on the index fragmentation or statistics updates on the DB, the script might need to run for 3-5 hours.
I think I’d look into why the job takes this long to complete (locking, number of indexes etc.) and either try to fix that or split the maintenance up into parts to get within the three-hour window.