Some Pester Tests for SQL Defaults

When I was at PowerShell Conference EU in Hannover last month (the videos are available now – click here and the slides and code here) I found out about Irwin Strachan's Active Directory Operations Test, which got me thinking.

I decided to do the same for my usual SQL set-up. Treating all of your servers to the same defaults makes them even easier to manage at scale remotely.

I am comfortable with using SMO to gather and change properties on SQL instances, so I started by doing this

It 'Should have a default Backup Directory of F:\SQLBACKUP\BACKUPS' {
    $Scriptblock = {
        ## Load the SMO assembly and connect to the local default instance
        [void][reflection.assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo');
        $srv = New-Object Microsoft.SqlServer.Management.Smo.Server '.'
        return $srv.BackupDirectory
    }
    $State = Invoke-Command -ComputerName ROB-SURFACEBOOK -ScriptBlock $Scriptblock
    $State | Should Be 'F:\SQLBACKUP\BACKUPS'
}

This is how to find the properties that you want to test

## Load the Assemblies
[void][reflection.assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo');

## Create a Server SMO object
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server SERVERNAME

## Explore it
$srv | gm

## If you find an array pick the first one and expand and then explore that
$srv.Databases[0] | select *
$srv.Databases[0] | gm

I quickly found, as I added more tests, that they were taking a long time to run (about 5 seconds per test) and that each one took an age to fail if the server name was incorrect or the server was unavailable.

I fixed the latter by testing connectivity with a ping before running the tests

## Check for connectivity
if ((Test-Connection $Server -Count 1 -Quiet) -eq $false) {
    Write-Error "Could not connect to $Server"
    continue
}

The continue is there because I wanted to loop through an array of servers
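
For context, the ping check sits inside that loop over the servers; a minimal sketch (the $Servers array name is just for illustration) looks like this

foreach ($Server in $Servers) {
    ## Skip to the next server if this one does not respond to a ping
    if ((Test-Connection $Server -Count 1 -Quiet) -eq $false) {
        Write-Error "Could not connect to $Server"
        continue
    }
    ## the Describe block and tests for $Server go here
}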

I improved the performance using a remote session and a custom object

      Describe "$Server" {
BeforeAll {
$Scriptblock = {
[pscustomobject]$Return = @{}
$srv = ''
$SQLAdmins = $Using:SQLAdmins
[void][reflection.assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo');
$srv = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
$Return.DBAAdminDb = $Srv.Databases.Name.Contains('DBA-Admin')
$Logins = $srv.Logins.Where{$_.IsSystemObject -eq $false}.Name
$Return.SQLAdmins = @(Compare-Object $Logins $SQLAdmins -SyncWindow 0).Length - $Logins.count -eq $SQLAdmins.Count
$SysAdmins = $Srv.Roles['sysadmin'].EnumMemberNames()
$Return.SQLAdmin = @(Compare-Object $SysAdmins $SQLAdmins -SyncWindow 0).Length - $SysAdmins.count -eq $SQLAdmins.Count
$Return.BackupDirectory = $srv.BackupDirectory
$Return.DataDirectory = $srv.DefaultFile

The BeforeAll script block is run, as it sounds like it should, once before all of the tests; BeforeEach would run once before each of the tests. I define an empty custom object, then create an SMO object and add the properties I am interested in testing to it, and return the custom object at the end.

            $Return.Alerts82345Exist = ($srv.JobServer.Alerts | Where {$_.Messageid -eq 823 -or $_.Messageid -eq 824 -or $_.Messageid -eq 825}).Count
            $Return.Alerts82345Enabled = ($srv.JobServer.Alerts | Where {($_.Messageid -eq 823 -or $_.Messageid -eq 824 -or $_.Messageid -eq 825) -and $_.IsEnabled -eq $true}).Count
            $Return.SysDatabasesFullBackupToday = $srv.Databases.Where{$_.IsSystemObject -eq $true -and $_.Name -ne 'tempdb' -and $_.LastBackupDate -lt (Get-Date).AddDays(-1)}.Count
            Return $Return
        }
        try {
            $Return = Invoke-Command -ScriptBlock $Scriptblock -ComputerName $Server -ErrorAction Stop
        }
        catch {
            Write-Error "Unable to Connect to $Server"
            $Error
            continue
        }
    }

I was then able to test against the properties of the custom object

It 'Should have Alerts for Severity 20 and above' {
    $Return.Alerts20SeverityPlusExist | Should Be 6
}
It 'Severity 20 and above Alerts should be enabled' {
    $Return.Alerts20SeverityPlusEnabled | Should Be 6
}
It 'Should have alerts for 823, 824 and 825' {
    $Return.Alerts82345Exist | Should Be 3
}
It 'Alerts for 823, 824 and 825 should be enabled' {
    $Return.Alerts82345Enabled | Should Be 3
}

Occasionally, for reasons I haven’t explored, I had to test against the value property of the returned object

          It "The Full User Database Backup should be scheduled Weekly $OlaUserFullSchedule" {
$Return.OlaUserFullSchedule.value | Should Be $OlaUserFullSchedule
}

I wanted to be able to run the tests against environments or groups of servers with different default values, so I parameterised the expected results as well. The logical next step was to turn it into a function, and then I could use parameter splatting. This also gives me the opportunity to show all of the things that I am currently passing to the tests as parameters

   $Parms = @{
Servers = 'SQLServer1','SQLServer2','SQLServer3';
SQLAdmins = 'THEBEARD\Rob','THEBEARD\SQLDBAsAlsoWithBeards';
BackupDirectory = 'C:\MSSQL\Backup';
DataDirectory = 'C:\MSSQL\Data\';
LogDirectory = 'C:\MSSQL\Logs\';
MaxMemMb = '4096';
Collation = 'Latin1_General_CI_AS';
TempFiles = 4 ;
OlaSysFullFrequency = 'Daily';
OlaSysFullStartTime = '21:00:00';
OlaUserFullSchedule = 'Weekly';
OlaUserFullFrequency = 1 ;## 1 for Sunday
OlaUserFullStartTime = '22:00:00';
OlaUserDiffSchedule = 'Weekly';
OlaUserDiffFrequency = 126; ## 126 for every day except Sunday
OlaUserDiffStartTime = '22:00:00';
OlaUserLogSubDayInterval = 15;
OlaUserLoginterval = 'Minute';
HasSPBlitz = $true;
HasSPBlitzCache = $True;
HasSPBlitzIndex = $True;
HasSPAskBrent = $true;
HASSPBlitzTrace =  $true;
HasSPWhoisActive = $true;
LogWhoIsActiveToTable = $true;
LogSPBlitzToTable = $true;
LogSPBlitzToTableEnabled = $true;
LogSPBlitzToTableScheduled = $true;
LogSPBlitzToTableSchedule = 'Weekly';
LogSPBlitzToTableFrequency = 2 ; # 2 means Monday
LogSPBlitzToTableStartTime  = '03:00:00'}

Test-SQLDefault @Parms
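
The finished Test-SQLDefault function is in the GitHub repository linked below, but as a rough idea of its shape, a skeleton that accepts these parameters might look something like this (a sketch only, with just a few of the parameters shown, not the actual function)

Function Test-SQLDefault {
    [CmdletBinding()]
    Param (
        [string[]]$Servers,
        [string[]]$SQLAdmins,
        [string]$BackupDirectory,
        [string]$DataDirectory,
        [string]$LogDirectory,
        [int]$MaxMemMb,
        [string]$Collation,
        [int]$TempFiles
        ## ...and so on for the rest of the values in the splat above
    )
    foreach ($Server in $Servers) {
        ## the connectivity check, remote session and Describe block shown earlier go here
    }
}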

I have some other tests whose expected values are specific to my environment, particularly the firewall rules, which you will have to modify to suit your own set-up.
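
As an illustration only (the rule name here is made up and will not match your environment), a firewall test could be written along these lines, assuming the target server has the Get-NetFirewallRule cmdlet available

It 'Should have a firewall rule for SQL Server' {
    $Rule = Invoke-Command -ComputerName $Server -ScriptBlock {
        ## look up the rule by display name on the remote machine
        Get-NetFirewallRule -DisplayName 'SQL Server' -ErrorAction SilentlyContinue
    }
    $Rule | Should Not BeNullOrEmpty
}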

To be able to run this you will need the Pester module. If you are using Windows 10 it is installed by default; if not, this will install it

Find-Module -Name 'Pester' | Install-Module

You can find more about Pester here and here and also in these videos from the conference.
You can find the tests on GitHub here and I will continue to add to the defaults that I check.
This is not a replacement for other SQL configuration tools such as Policy Based Management (PBM), but it is a nice, simple way of reporting on the current status of a SQL installation, either at a particular point in time when something is wrong, or after an installation, prior to passing the server over to another team or into service.


Enterprise Strategies – A #TSQL2sDay post

This month's TSQL2sDay blog post party is hosted by Jen McCown and is about Enterprise Strategy.

Adam Machanic started T-SQL Tuesday over 5 years ago and you will find many brilliant posts under that heading if you search for them.

Managing SQL Servers at enterprise scale is not a straightforward task. Your aim as a DBA should be to simplify it as much as possible and to automate everything that you possibly can. This post by John Sansom could have been written for this month's party and I recommend that you read it.

So here are a few points that I think you should consider if you look after SQL in an Enterprise environment.

  • Enterprise Strategy will undoubtedly garner a whole host of excellent posts and Jen will provide a round-up post which, I am certain, will be an excellent resource. Take a look here
  • Know where your instances are and have a single place that you can reference them from. Some people recommend a Central Management Server but I find this too restrictive for my needs. I use an InstanceList table in my DBA Database with the following columns: [ServerName], [InstanceName], [Port], [AG], [Inactive], [Environment] and [Location] (a sketch of a possible table definition follows the code below). This enables me to target instances not just by name but by environment (Dev, Test, Pre-Prod, Live etc.), by location, or, by joining the InstanceList table with another table, by application or any number of other factors. I also capture information about the servers at Windows and SQL level in this database, so I can target the SQL 2012 servers specifically if need be, or use any other metric. This is very powerful and enables far greater flexibility than the CMS in my opinion.
  • Use PowerShell (no surprise that I would mention this!). PowerShell is a brilliant tool for automation and I use it all of the time
  • Get used to using this piece of PowerShell code
	 $Query = @"
	 SELECT [ServerName],[InstanceName],[Port]
	  FROM [DBADatabase].[dbo].[InstanceList]
	  Where Inactive = 0 AND NotContactable = 0
	"@
	try{
	$AlltheServers= Invoke-Sqlcmd -ServerInstance $CentralDBAServer -Database $CentralDatabaseName -Query $query
	$ServerNames = $AlltheServers| Select ServerName,InstanceName,Port
	}
	foreach ($ServerName in $ServerNames)
	{
	## $ServerName
	 $InstanceName =  $ServerName|Select InstanceName -ExpandProperty InstanceName
	 $Port = $ServerName| Select Port -ExpandProperty Port
	$ServerName = $ServerName|Select ServerName -ExpandProperty ServerName 
	 $Connection = $ServerName + '\' + $InstanceName + ',' + $Port
	
	 try
	 {
	 $srv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $Connection

Notice the query variable above; this is where the power lies, as it enables you to gather all of the instances that you need for your task, as described in the bullet point above. Once you get used to doing this you can do things like this: identify all the instances with Remote DAC disabled using a query against the DBA Database and then enable it on all of the servers by adding this code to the loop above

## Remote DAC is a server configuration option in SMO
$srv.Configuration.RemoteDacConnectionsEnabled.ConfigValue = 1
$srv.Configuration.Alter()

Very quick, very simple and very, very powerful. You can also use this to run T-SQL scripts against the instances you target, but there are some added complications with Invoke-SQLCmd that you need to be aware of.
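
For example (the script path here is only an illustration), inside the loop above you could run a saved T-SQL script like this, wrapped so that any failure is caught rather than silently ignored

try {
    ## run the script file against the instance built up in $Connection
    Invoke-Sqlcmd -ServerInstance $Connection -InputFile 'C:\Scripts\SomeMaintenanceScript.sql' -ErrorAction Stop
}
catch {
    Write-Error "Failed to run the script against $Connection - $_"
}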

  • BE CAREFUL. Test and understand and test before you run any script on a live system especially using a script like this which enables you to target ALL of your servers. You must definitely check that your $ServerNames array contains only the instances you need before you make any changes. You need to be ultra-cautious when it is possible to do great damage
  • Write scripts that are robust and handle errors gracefully. I use Jason Wasser's (@wasserja) Write-Log function to write to a text file and wrap my commands in a try/catch block (a rough sketch follows this list).
  • Include comments in your scripts to assist either the future you or the folks in your position in 5 years' time. I would also add one of my bugbears: use the description block in Agent jobs. The first place any DBA is going to go when a job fails is the properties of the job. Please fill in that block so that anyone troubleshooting knows something about what the job does, or at the very least has a link to some documentation about it.
  • Finally in my list, don’t overdo the alerts. Alerting is vital for any DBA; it is a brilliant way to ensure that you quickly know about any issues affecting your estate. But all alerts should be actionable, and in some cases you can automate the action that you take. The message here is: don’t send messages to the DBA team email for every single tiny thing or they will get swamped and ignore the vital ones. This holds for whichever alerting or monitoring system you use.
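
As a rough sketch of that robust-script pattern inside the loop shown earlier (Write-Log here stands in for Jason Wasser's function and the parameter names are assumptions, so check the real signature; $LogFile is an illustrative variable)

try {
    ## do the work for this instance here
    $srv = New-Object Microsoft.SqlServer.Management.Smo.Server $Connection
    Write-Log -Message "Completed work against $Connection" -Path $LogFile
}
catch {
    ## log the failure so there is a record of which instance failed and why, then carry on
    Write-Log -Message "Failed against $Connection - $_" -Path $LogFile -Level Error
    continue
}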

This is but a small sub-section of the things that you need to consider when you are responsible for a large SQL estate, but if you need help and advice, or just moral support, and you don’t already interact with the SQL community, then make today the day you start. Maybe this post by Thomas La Rock is a good place to start, or your nearest user group/chapter, or the #sqlfamily hashtag, or give me a shout and I will gladly help.