Building an Azure SQL Database with Terraform and Visual Studio Code

I have been using Terraform for the last week or so to create some infrastructure, and I decided to bring that knowledge back to a problem that I and others suffer from – building environments for presentations – all for the sake of doing some learning.

What is Terraform?

According to the website


HashiCorp Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned


https://www.terraform.io/

This means that I can define my infrastructure as code. If I can do that then I can reliably do the same thing again and again, at work to create environments that have the same configuration or outside of work to repeatedly build the environment I need.

Building an Azure SQL Database with Terraform

To understand how to build a thing, the best place to start is the documentation at https://www.terraform.io/docs. For an Azure SQL Database you will find a block of code in the docs that looks like this
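
Something along these lines (a sketch based on the azurerm provider documentation of the time – the example names and values are the documentation's, not mine):

resource "azurerm_resource_group" "test" {
  name     = "acceptanceTestResourceGroup1"
  location = "West US"
}

resource "azurerm_sql_server" "test" {
  name                         = "mysqlserver"
  resource_group_name          = "${azurerm_resource_group.test.name}"
  location                     = "West US"
  version                      = "12.0"
  administrator_login          = "mradministrator"
  administrator_login_password = "thisIsDog11"
}

resource "azurerm_sql_database" "test" {
  name                = "mysqldatabase"
  resource_group_name = "${azurerm_resource_group.test.name}"
  location            = "West US"
  server_name         = "${azurerm_sql_server.test.name}"
}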

If you read the code, you can see that there are key-value pairs defining information about the resource that is being created. Anything inside a ${} is a dynamic reference, so

${azurerm_resource_group.test.name}

refers to the name property in the azurerm_resource_group block called test (or the name of the resource group 🙂 )

Infrastructure As Code

So I can put that code into a file (name it main.tf), alter it with my values, “run Terraform”, and what I want will be created. Let's take it a step further though, because I want to be able to reuse this code. Instead of hard-coding all of the values I am going to use variables. I can do this by creating another file called variables.tf which looks like this
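
A sketch of its shape (the variable names here are illustrative – the real ones are in the repository linked below):

variable "resourcegroupname" {
  default = "beardrules"
}

variable "location" {
  default = "uksouth"
}

variable "sqlservername" {
  default = "beard-sql-server"
}

variable "databasename" {
  default = "Jeremy"
}

variable "adminlogin" {
  default = "beardadmin"
}

variable "adminpassword" {
  description = "The password for the SQL admin login"
}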

and my main.tf then looks like this.
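
Again as a sketch, with the hard-coded values swapped for the variables above:

resource "azurerm_resource_group" "beard" {
  name     = "${var.resourcegroupname}"
  location = "${var.location}"
}

resource "azurerm_sql_server" "beard" {
  name                         = "${var.sqlservername}"
  resource_group_name          = "${azurerm_resource_group.beard.name}"
  location                     = "${azurerm_resource_group.beard.location}"
  version                      = "12.0"
  administrator_login          = "${var.adminlogin}"
  administrator_login_password = "${var.adminpassword}"
}

resource "azurerm_sql_database" "beard" {
  name                = "${var.databasename}"
  resource_group_name = "${azurerm_resource_group.beard.name}"
  location            = "${azurerm_resource_group.beard.location}"
  server_name         = "${azurerm_sql_server.beard.name}"
}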

You can find these files in my GitHub Repository here.

Alright – deploy something

To deploy the code that I have written I need to download Terraform from https://www.terraform.io/downloads.html and then extract the exe to a folder in my PATH (I chose C:\Windows). Then in Visual Studio Code I installed two extensions: the Terraform extension by Mikael Olenfalk, which enables syntax highlighting and auto-completion for the .tf files, and the Azure Terraform extension. You will also need Node.js from here.

With those in place I navigated to the directory holding my files in Visual Studio Code and pressed F1 and started typing azure terraform and chose Azure Terraform Init

I was then prompted to use Cloud Shell and a browser opened to login. Once I had logged in I waited until I saw this

I press F1 again and this time choose Azure Terraform Plan. This is going to show me what Terraform is going to do if it applies this configuration.

You can see what is going to be created. It is going to create 3 things – the resource group, the SQL server and the SQL database.

Once you have checked that the plan is what you want, press F1 again and choose Azure Terraform Apply

You are then asked to confirm that this is what you want. Only “yes” will be accepted. Then you will see the infrastructure being created

and a minute later

and Jeremy exists in the beardrules resource group

Then, once I have finished using the SQL instance, I can press F1 again and choose Azure Terraform Destroy. Again there is a confirmation required.

and you will see the progress for 46 seconds

and all of the resources have gone.

That's a good start. This enables me to create resources quickly and easily, and to keep their configuration safely in source control, ready to reuse.

In my next post I will create an Azure DevOps pipeline to deploy an Azure SQL Database with Terraform.

The post after will show how to use Azure DevOps Task Groups to use the same build steps in multiple pipelines and build an Azure Linux SQL Server VM.

The post after that will show how to use Azure DevOps templates to use the same build steps across many projects and build pipelines, and will build a simple AKS cluster.

Adding a Folder of Scripts to GitHub with Azure Data Studio

In my last post I showed how to add a folder of scripts to GitHub using Visual Studio Code.

You can do it with Azure Data Studio as well. It’s exactly the same steps!

The blog post could end here but read on for some screen shots 😉

Follow the previous post for details of setting up a new GitHub account

Create a repository in GitHub


Open the folder in Azure Data Studio with CTRL K CTRL O (Or File –> Open Folder)

Click on the Source Control icon or CTRL + SHIFT + G and then Initialize Repository

Choose the folder

Write a commit message

Say yes to the prompt. Press CTRL + ‘ to open the terminal

Navigate to the scripts folder. (I have a PSDrive set up to my Git folder)

and copy the code from the GitHub page after “…or push an existing repository from the command line”
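
At the time of writing those commands look like this (with your own username and repository name substituted):

git remote add origin https://github.com/USERNAME/REPOSITORY.git
git push -u origin master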

and run it

and there are your scripts in GitHub

Make some changes to a script and it will go muddy brown

and then write a commit message. If you click on the file name in the source control tab then you can see the changes that have been made that are not currently tracked

Commit the change with CTRL + ENTER and then click the roundy-roundy icon (seriously, anyone know its name?), click yes on the prompt and your changes are in GitHub as well 🙂

Realistically, you can use the previous post to do this with Azure Data Studio as it is built on top of Visual Studio Code but I thought it was worth showing the steps in Azure Data Studio.

Happy Source Controlling

Adding a Folder of Scripts to GitHub

Yesterday there was a tweet from Allen White.

Allen wanted to add his scripts folder to source control but didn't have a how-to handy, so I thought I would write one. Hopefully this will enable someone new to GitHub and to source control to get a folder of scripts under source control

GitHub account

If you do not have a GitHub account go to https://github.com and create a new account

There is a funky are you a human challenge

Then you can choose your subscription

Then answer some questions (Note – you probably want to choose different answers to the what are you interested in question! I’d suggest something technical)

You need to do the email verification

Next is a very important step – please do not skip this. You should set up two-factor authentication. Yes, even if “it's just for me, there is nothing special here”

Click your user icon top right and then settings

Then click set up two factor authentication

and either set up with an app or via SMS (I suggest the app – it is better)

OK – Now you have your GitHub account set up. It should have taken you less time than reading this far.

Add a Scripts Folder to GitHub

OK, now to add a folder of scripts to a repository. Here is my folder of scripts. They can be any type of files. I would recommend copying the folder to a specific Git folder.

Open VS Code – if you don't have VS Code, download it from https://code.visualstudio.com/. From the welcome window choose open folder


and open your scripts folder

In VS Code click the Source Control button


and up at the top you will see a little icon – initialise repository


Click that and choose your folder

Which will then show all of the changes to the repository (adding all the new files)

Now we need to add a commit message for our changes. I generally try to write commit messages that give the reason why the change has been made, as what has been changed is easy to see in VS Code (as well as in other source control GUI tools)

Click the tick or press CTRL + ENTER and this box will pop up


I never click Always, I click yes, so that I can check if I am committing the correct files. Now we have created a local repository for our scripts folder. Our next step is to publish it to GitHub

Create a New Repository in GitHub

In Github we need to create a remote repository. Click on the New Button. Give your repository a name and decide if you want it to be Public (available for anyone to search and find) or Private (only available to people you explicitly provide access to).

This will give you a page that looks like this

Copy the code after …or push an existing repository from the command line

and paste it into PowerShell in VS Code. Make sure that your prompt is at the root of your scripts folder.

Fill in your username and password and your 2FA

Then you will see a page like this

and if you refresh your GitHub page you will see

Congratulations, your code is source controlled 🙂

Making Changes

Now you can make a change to a file

Commit your change

Hit the roundy-roundy icon (anyone know its proper name?)

Press OK and your commit will be pushed to GitHub 🙂

Yay – Source Control all the things

How to break a SQL 2019 container on my laptop

Just a very quick post today. At the weekend I blogged about creating SQL 2019 containers with named volumes enabling you to persist your data, and yesterday about creating a random workload using PowerShell and a big T-SQL script.

The interesting thing about creating workload is that you can break things 🙂

When I created a SQL 2019 container with the data files mapped to a directory on my laptop's C drive with a docker-compose file like this
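
(A sketch of its shape – the image tag, port, password and host path here are illustrative:)

version: '3.7'
services:
  sql2019:
    image: mcr.microsoft.com/mssql/server:2019-CTP2.4-ubuntu
    ports:
      - "15591:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Testing1122
    volumes:
      # map a folder on the Windows host to the data directory in the container
      - "C:/MSSQL/DATA:/var/opt/sqlserver"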

and then restore the AdventureWorks database to use the /var/opt/sqlserver directory and run a workload, after a while the container stops, and when you examine the logs you find out why.

I had a whole load of these errors

Then some of these

Then it went really bad

But that caused

Master eh? Now what will you do?

Interesting, then back to this.

It did all that again before

failing to capture its dump!! Oops 🙂

I had to recreate the containers without using the named volumes and then I could run my workload 🙂

Nothing particularly useful about this blog post other than an interesting look at the error log when things go wrong 🙂

Generating a Workload against AdventureWorks with PowerShell

For a later blog post I have been trying to generate some workload against an AdventureWorks database.

I found this excellent blog post by Pieter Vanhove https://blogs.technet.microsoft.com/msftpietervanhove/2016/01/08/generate-workload-on-your-azure-sql-database/ which references this 2011 post by Jonathan Kehayias
https://www.sqlskills.com/blogs/jonathan/the-adventureworks2008r2-books-online-random-workload-generator/

Both of these run a random query in a single thread so I thought I would use PoshRSJob by Boe Prox b | t to run multiple queries at the same time 🙂

To install PoshRSJob, like with any PowerShell module, you run
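
which, from the PowerShell Gallery, is:

Install-Module -Name PoshRSJob -Scope CurrentUser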

I downloaded the AdventureWorksBOLWorkload zip from Pieter's blog post and extracted it to my C:\temp folder. I created an Invoke-RandomWorkload function which you can get from my functions repository in GitHub. The guts of the function look like this
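
(A sketch rather than the exact code – the variable names other than $NumberOfJobs and $Throttle are my assumptions, and the real function in the repository has parameters and logging too:)

# split the workload file into individual queries on the GO batch separators
$Queries = (Get-Content -Path $PathToScripts -Raw) -split 'GO\r?\n'

# create $NumberOfJobs jobs, running $Throttle of them at a time
1..$NumberOfJobs | Start-RSJob -Name 'Workload' -Throttle $Throttle -ModulesToImport SqlServer -ScriptBlock {
    # each job picks a random query from the file and runs it
    $Query = $Using:Queries | Get-Random
    Invoke-Sqlcmd -ServerInstance $Using:SqlInstance -Database $Using:Database -Query $Query
}

# wait for all of the jobs to complete and tidy up
Get-RSJob -Name 'Workload' | Wait-RSJob | Remove-RSJob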

This will create $NumberOfJobs jobs and then run $Throttle of them at a time in the background until they have all completed. Each job will run a random query from the query file using Invoke-SqlCmd. Why did I use Invoke-SqlCmd and not Invoke-DbaQuery from dbatools? dbatools creates runspaces in the background to help with logging, and creating runspaces inside background jobs causes errors

Then I can run the function with
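
Something of this shape (parameter names as in the sketch above – check the function in GitHub for the real ones):

Invoke-RandomWorkload -SqlInstance 'localhost,15591' -Database AdventureWorks2014 -NumberOfJobs 100 -Throttle 10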

and create a random workload. Creating lots of background jobs takes resources so when I wanted to run a longer workload I created a loop.
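
Along these lines – running batches back to back rather than holding thousands of jobs in memory at once:

# run ten batches of 100 jobs, one batch after another
1..10 | ForEach-Object {
    Invoke-RandomWorkload -SqlInstance 'localhost,15591' -Database AdventureWorks2014 -NumberOfJobs 100 -Throttle 10
}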

You can get the function here. The full code is below

Persisting databases with named volumes on Windows with docker compose

With all things containers I refer to my good friend Andrew Pruski. Known as dbafromthecold on twitter he blogs at https://dbafromthecold.com

I was reading his latest blog post Using docker named volumes to persist databases in SQL Server and decided to give it a try.

His instructions worked perfectly and I thought I would try them using a docker-compose file as I like the ease of spinning up containers with them.

I created a docker-compose file like this which will map my backup folder on my Windows 10 laptop to a directory on the container and two more folders to the system folders on the container in the same way as Andrew has in his blog.
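
A sketch of its shape (the service name matches the error message later in this post; the image tag, port, password and host paths are my stand-ins):

version: '3.7'
services:
  2019-CTP23:
    image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu
    ports:
      - "15592:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Testing1122
    volumes:
      # my backup folder, plus the two system folders as in Andrew's post
      - "C:/MSSQL/BACKUP/KEEP:/var/opt/mssql/backups"
      - "C:/MSSQL/SYSTEM:/var/opt/mssql"
      - "C:/MSSQL/USER:/var/opt/sqlserver"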

and then from the directory I ran
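
docker-compose up -d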

This will build the containers as defined in the docker-compose file. The -d runs the container in the background. This was the result.

UPDATE – 2019-03-27

I have no idea why, but today it has worked as expected using the above docker-compose file. I had tried this a couple of times, restarted Docker and restarted my laptop, and was consistently getting the results below – however, today it has worked.

So feel free to carry on reading, it's a fun story and it shows how you can persist the databases in a new container, but the above docker-compose file has worked!

The command completed successfully but as you can see on the left the container is red because it is not running. (I am using the Docker Explorer extension for Visual Studio Code.)

I inspected the logs from the container using
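
the command being (substitute the name docker-compose gave your container – docker ps will show it):

docker logs containername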

which returned

This is an evaluation version. There are [153] days left in the evaluation period.
This program has encountered a fatal error and cannot continue running at Tue Mar 26 19:40:35 2019
The following diagnostic information is available:
Reason: 0x00000006 Status: 0x40000015 Message: Kernel bug check Address: 0x6b643120
Parameters: 0x10861f680
Stacktrace: 000000006b72d63f 000000006b64317b 000000006b6305ca
000000006b63ee02 000000006b72b83a 000000006b72a29d
000000006b769c02 000000006b881000 000000006b894000
000000006b89c000 0000000000000001
Process: 7 – sqlservr
Thread: 11 (application thread 0x4)
Instance Id: e01b154f-7986-42c6-ae13-c7d34b8b257d
Crash Id: 8cbb1c22-a8d6-4fad-bf8f-01c6aa5389b7
Build stamp: 0e53295d0e1704ae5b221538dd6e2322cd46134e0cc32be49c887ca84cdb8c10
Distribution: Ubuntu 16.04.6 LTS
Processors: 2
Total Memory: 4906205184 bytes
Timestamp: Tue Mar 26 19:40:35 2019
Ubuntu 16.04.6 LTS
Capturing core dump and information to /var/opt/mssql/log…
dmesg: read kernel buffer failed: Operation not permitted
No journal files were found.
No journal files were found.
Attempting to capture a dump with paldumper
WARNING: Capture attempt failure detected
Attempting to capture a filtered dump with paldumper
WARNING: Attempt to capture dump failed. Reference /var/opt/mssql/log/core.sqlservr.7.temp/log/paldumper-debug.log for details
Attempting to capture a dump with gdb
WARNING: Unable to capture crash dump with GDB. You may need to
allow ptrace debugging, enable the CAP_SYS_PTRACE capability, or
run as root.

which told me that …………. it hadn’t worked. So I removed the containers with
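
docker-compose down will do that – it stops and removes the containers defined in the file:

docker-compose down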

I thought I would create the volumes ahead of time like Andrew’s blog had mentioned with
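
(The first volume name matches the error below; the second is my assumption, following Andrew's naming:)

docker volume create mssqlsystem
docker volume create mssqluser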

and then use the volume names in the docker-compose file mapped to the system folders in the container. This time the result was

ERROR: Named volume “mssqlsystem:/var/opt/sqlserver:rw” is used in service “2019-CTP23” but no declaration was found in the volumes section.

So that didn't work either 🙂

I decided to inspect the volume definition using
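
docker volume inspect mssqlsystem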

I can see the mountpoint is /var/lib/docker/volumes/mssqlsystem/_data so I decided to try a docker-compose like this
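
(A sketch – only the volumes section changed, now using the mountpoint path on the left-hand side:)

    volumes:
      - "/var/lib/docker/volumes/mssqlsystem/_data:/var/opt/mssql"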

and then ran docker-compose up without the -d flag so that I could see all of the output

You can see in the output that the system database files are being moved. That looks like it is working, so I used CTRL + C to stop the container and return to the terminal. I then ran docker-compose up -d and

I created a special database for Andrew.

I could then remove the container with

To make sure there is nothing up my sleeve I altered the docker-compose file to use a different name and port but kept the volume definitions the same.

I ran docker-compose up -d again and connected to the new container and lo and behold the database is still there

So after doing this, I have learned that to persist the databases and to use docker-compose files I had to map the volume to the mountpoint of the docker volume. Except I haven’t, I have learned that sometimes weird things happen with Docker on my laptop!!

What's a SQL Notebook in Azure Data Studio?

Azure Data Studio is a cross-platform database tool for data professionals using the Microsoft family of on-premises and cloud data platforms on Windows, MacOS, and Linux.

Recently Vicky Harp tweeted

By the way, you can watch a recording from SQLBits of Vicky’s session


So in the interest of learning about something new I decided to give it a try.

Install The Insiders Edition

Unlike Visual Studio Code, which has a link to the insiders download on the front page, you will have to visit the GitHub repository for the links to download the insiders release of Azure Data Studio. Scroll down and you will see

Try out the latest insiders build from master:

See the change log for additional details of what’s in this release.

Once you have installed it, you can connect to an instance, right-click and choose New Notebook, or you can use File – New Notebook

Incidentally, I use the docker-compose file here to create the containers and I map C:\MSSQL\BACKUP\KEEP on my local machine (where my backups are) to /var/opt/mssql/backups on the containers on lines 10 and 17 of the docker-compose file, so change as required. If you want to follow along then put the ValidationResults.bak in the folder on your local machine.
The Create-Ag.ps1 shows the code and creates an AG with dbatools. But I digress!

Install Notebook Dependencies

Once you click New Notebook you will get a prompt to install the dependencies.

It will show its output

and take a few minutes to run

It took all of 11 minutes on my machine

Create a Notebook

OK, so now that we have the dependencies installed we can create a notebook. I decided to use the ValidationResults database that I use for my dbachecks demos and describe here. I need to restore it from my local folder that I have mapped as a volume to my container. Of course, I use dbatools for this 🙂
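
With dbatools the restore is a couple of lines, in this sort of shape (a sketch – the instance port, password and backup path here are illustrative):

# build a credential for the sa login on the container
$SqlCredential = New-Object System.Management.Automation.PSCredential('sa', (ConvertTo-SecureString 'Testing1122' -AsPlainText -Force))

# restore the backup from the folder mapped into the container
Restore-DbaDatabase -SqlInstance 'localhost,15592' -SqlCredential $SqlCredential -Path '/var/opt/mssql/backups/ValidationResults.bak'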

I had already got a connection saved to the instance in Azure Data Studio; you may need to create a new one using the new connection icon at the top left and fill in the details. The password is in the code above.



Now I can start with my notebook. I am faced with this



I click on text and provide an intro


Once I had written that and clicked out, I couldn’t see what to do straight away!

Then I saw the code and text buttons at the top 🙂 Right, lets get on with it 🙂 I hit the code button and paste in the T-SQL to reset the dates in the database to simulate dbachecks having been run this morning.
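
(The real script depends entirely on the schema of the ValidationResults database, so treat this as the shape of the idea only – the table and column names here are invented:)

-- shift all of the stored check dates forward so the latest results look like today's
UPDATE dbo.ValidationResults
SET CheckDate = DATEADD(DAY, DATEDIFF(DAY, (SELECT MAX(CheckDate) FROM dbo.ValidationResults), GETDATE()), CheckDate);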


There’s a run cell button on the right and when I press it

Cool 🙂

If the SQL query has results then they are shown as well

This is fun and I can see plenty of uses for it. Go and have a play with SQL notebooks 🙂

Source Control

I used CTRL K, CTRL O to open a folder and saved my notebook in my local Presentations folder, which is source controlled. When I open the explorer with CTRL + SHIFT + E I can see that the folder and the file are colour coded green and have a U next to them, marking them as Untracked. I can also see that the source control icon has a 1 for the number of files with changes and, in the bottom left, that I am in the master branch.

If I click on the source control icon (or CTRL + SHIFT + G) I can see the files with the changes and can enter a commit message

I then press CTRL + ENTER to commit my change and get this pop-up


As I only have one file and it has all the changes for this commit I click yes. If I had changed more than one file and only wanted to commit a single one at a time I would hover my mouse over the file and click the + to stage my change.



If I make a further change to the notebook and save it, I can see that the source control provider recognises the change but this time the folder the file is in and the file are colour coded brown with an M to show that they have been modified.

Unlike Visual Studio Code, when you then click on the source control icon and click on the change it does not show the differences in the notebook although this works with SQL files.

When I have made all my changes and committed them with good commit messages


I can see that there are 3 local changes ready to be pushed to my remote repository (GitHub in this case) and 0 remote commits in this branch by looking at the bottom left

I can click on the “roundy roundy” icon (I don't know its proper name 😊) and synchronise my changes. This comes with a pop-up

Personally I never press OK, Don't Show Again because I like the double check and to think “Is this really what I want to do right now?”. Once I press OK my changes will be synched with the remote repository. All of this means that you can find the notebook I have used in my Presentations GitHub Repository, which means that you can run the notebook too, using the docker-compose file here and the instructions further up in the post.

#TSQL2sDay – NomNomNomNomNom

The topic for this month's T-SQL Tuesday #112, hosted by Shane O'Neill (Blog / Twitter), is about “dipping into your cookie jar”. This reference means “when times get tough, how do you dip into your reserves to keep going”. Shane asks the following:

That is what I want from the contributors of this T-SQL Tuesday, those memories that they can think back on for sustenance. Like the humble cookie, I want a humble brag.

Mmmm Cookies


Photo by Pille-Riin Priske on Unsplash

I'm not good at bragging, I'm generally convinced that all of you are better than me. Yes, I am aware that it is irrational. This has made writing this post really hard. Sure, I get immense pleasure and satisfaction from solving a problem, that's a form of instant fulfillment. Certainly, I enjoy teaching people and passing on my knowledge for them to use. I am not going to write about technical things that I have done, because they don't give me sustenance in that way.

So what does give me sustenance when times are hard?

People.

The things I am most proud of are the things other people do where I have played a small part. These are the things I look back at and that help to energise me. Things like

  • A couple of people who I suggested started writing blogs and then speaking who are now seen as experts in their niche.
  • The people I mentored as new speakers who are now speaking all over the continent.

The most recent story was a DBA who sat in a full day pre-con at a SQL Saturday, took loads of notes and waited at the end to ask questions. We were looking at some code and she was telling me it wasn’t very good and apologising for it. It was good, it performed the required actions over a large estate and I told her so. I asked about her job and with a big sigh, she told a story of being stuck in a rut, dealing with a lot of legacy systems, not enjoying it and not being able to move on. We had a long talk.

Cut to this year's SQL Bits and she came running up to me all energised. She has a new job, doing something “cool in the cloud”, and she said the things she had learned had helped her to land this role.

In all of these cases, it is the person involved who has done all of the hard work, but it is these things that keep me going. The thank-yous and the smiles I see on those people's faces as they do the thing that they love and enjoy their success and progression 🙂

Cake !!!!!


Photo by Prince Abid on Unsplash

Hey, that's cake and not cookies, Rob.

I know. The biggest thing that keeps me going when times are tough, though, is the security I am able to provide. Nearly 20 years ago my life was very different. Without a job, having had to give up a career, struggling to deal with my wife's serious illnesses, and suddenly responsible for the entire household without the means to provide, I was in a very bleak place and saw no way out.

So to have found a career that is my hobby, to be able to work and also to have fun, to have a social world that provides me with friends and entertainment in many countries and the opportunity to experience different cultures, and still be able to live comfortably. That's a blessing and what keeps me going.

Also being able to pay my dad back for turning up with sacks of potatoes by taking him to football matches and comedy shows 🙂

Acknowledge what you have got, tell your loved ones that you love them, enjoy life and use your cookies when you need them but don’t forget the cake 🙂

#DataInDevon – Getting up to speed with PowerShell or spend a day with one of four other MVPs :-)

Saturday 27th April is Global Azure Bootcamp day

What’s Global Azure Bootcamp?

The website says it best


…. communities will come together once again in the sixth great Global Azure Bootcamp event! Each user group will organize their own one day deep dive class on Azure the way they see fit and how it works for their members. The result is that thousands of people get to learn about Azure and join together online under the social hashtag #GlobalAzure!

Saturday Is Free Learning

I am a part of the team organising the event in Exeter. Now there is a little story here. We had chosen this date by chance to hold an event we call Data In Devon, giving people in the South West (of the UK) the chance to access a whole day of high-quality data and technical sessions for free on a Saturday.

When the Global Azure Bootcamp was announced, we had a conversation with the organisers and they agreed that we could add Exeter as a venue as we had already decided to have a whole track dedicated to Azure. You can find our schedule here https://sqlsouthwest.co.uk/data-in-devon-saturday-schedule/ and you can register to attend via this form 

Now, we have some costs obviously – not a lot, but venues are not free and neither is food 😉. We have a couple of sponsors (feel free to contact me if your company is interested in sponsoring the event) but we also have some paid training days on Friday 26th April.

Friday Is Training Day

It's a great opportunity to get cheap, high-quality training from some of the best in their areas of expertise. There are still some tickets for £175, and the price will then rise only to £200. I think that £200 is fantastic value to be able to spend a day learning from

Alex Whittles – Data Platform MVP – BI in Azure
John Martin – Data Platform MVP – Infrastructure as Code with Terraform
Terry McCann – Data Platform MVP – Machine Learning: from model to production using the cloud, containers and DevOps
William Durkin – Data Platform MVP – Performance Pain Reduction for Data Platform Projects

and myself – Getting up to speed with PowerShell

You can sign up for any of these sessions by following the instructions here https://sqlsouthwest.co.uk/training-day-schedule/#Pricing. We don't have a fancy website or booking system as we wanted to keep costs down.

The details of my training day are below

Getting up to speed with PowerShell

PowerShell is cross-platform; it works exactly the same on Windows, Linux and macOS. It is awesome for automation and amazing for administration.

We will cover

  • the basics of PowerShell and PowerShell security
  • how to open PowerShell and how to install PowerShell
  • 4 vital commands to enable you to help yourself (see the sketch after this list)
  • The PowerShell Gallery and how to find, install and use additional modules
  • Reading the language
  • Working with output
  • Why Red text is a good thing and how to learn from the errors
  • We will even delve into scripting with PowerShell and how to validate your environment
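
To give a flavour of the “4 vital commands” part, they are the sort of self-help commands below (a sketch – the exact four on the day may differ):

Get-Command -Name *process*        # find the command that you need
Get-Help Get-Process -Examples     # learn how to use it
Get-Process | Get-Member           # discover the properties and methods of the output
Update-Help                        # keep the built-in help up to date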

There will also be the opportunity to learn about any areas of PowerShell, Automation, CI/CD that you have questions about. This is a beginner level session in which I will teach you to be comfortable with PowerShell and confident in being able to use it in the future

Attendees wanting to follow along should bring a laptop.

Using Docker to run Integration Tests for dbachecks

My wonderful friend André Kamman wrote a fantastic blog post this week, SQL Server Container Instances via Cloudshell, about how he uses containers in Azure to test code against different versions of SQL Server.

It reminded me that I do something very similar to test dbachecks code changes. I thought this might make a good blog post. I will talk through how I do this locally as I merge a PR from another great friend Cláudio Silva who has added agent job history checks.

GitHub PR VS Code Extension

I use the GitHub Pull Requests extension for VS Code to work with pull requests for dbachecks. This enables me to see all of the information about the pull request, merge it, review it, and comment on it, all from VS Code

I can also see which files have been changed and which changes have been made

Once I am ready to test the pull request I perform a checkout using the extension

This will update all of the files in my local repository with all of the changes in this pull request

You can see at the bottom left that the branch changes from development to the name of the PR.

Running The Unit Tests

The first thing that I do is to run the Unit Tests for the module. These will test that the code is following all of the guidelines that we require and that the tests are formatted in the correct way for the Power Bi to parse. I have blogged about this here and here, and we use these Pester tests in our CI process in Azure DevOps, which I described here.

I navigate to the root of the dbachecks repository on my local machine and run
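
That command being something like (the exact switches may differ):

Invoke-Pester .\tests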

and after about a minute

Thank you Cláudio, the code has passed the tests 😉

Running Some Integration Tests

The difference between Unit tests and Integration tests in a nutshell is that the Unit tests are testing that the code is doing what is expected without any other external influences whilst the Integration tests are checking that the code is doing what is expected when running on an actual environment. In this scenario we know that the code is doing what is expected but we want to check what it does when it runs against a SQL Server and even when it runs against multiple SQL Servers of different versions.

Multiple Versions of SQL Server

As I have described before, my friend and former colleague Andrew Pruski b | t has many resources for running SQL in containers. This means that I can quickly and easily create fresh, uncontaminated instances of SQL 2012, 2014, 2016 and 2017.

I can create 4 instances of different versions of SQL in (a tad over) 1 minute. How about you?

Imagine how long it would take to run the installers for 4 versions of SQL and the pain you would have trying to uninstall them and make sure everything is ‘clean’. Even images that have been sysprep’d won’t be done in 1 minute.

Docker Compose Up ?

So what is this magic command that has enabled me to do this? docker-compose uses a YAML file to define multi-container applications. This means that with a file called docker-compose.yml like this
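
(A sketch of the shape of that file – the image names are placeholders for Andrew's images, and the ports match the ones mentioned below:)

version: '3.7'
services:
  sql2012:
    image: sqlserver2012image   # placeholder – I used Andrew's images
    ports:
      - "15586:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Testing1122   # the variable name depends on the image
  sql2014:
    image: sqlserver2014image   # placeholder
    ports:
      - "15587:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Testing1122
  # sql2016 and sql2017 follow the same pattern on ports 15588 and 15589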

and in that directory just run
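
docker-compose up -d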

and 4 SQL containers are available to you. You can interact with them via SSMS if you wish, connecting to localhost,PORTNUMBER. The port numbers in the above file are 15586, 15587, 15588 and 15589

Now it must be noted, as I describe here, that first I pulled the images to my laptop. The first time you run docker-compose it will take significantly longer if you haven't pulled the images already (pulling the images will take quite a while depending on your broadband speed)

Credential

The next thing is to save a credential to make it easier to automate. I use the method described by my PowerShell friend Jaap Brasser here. I run this code
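
That is Jaap's Export-Clixml technique (the path here is my choice):

Get-Credential | Export-Clixml -Path C:\temp\sacred.xml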

and then I can create a credential object using
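
(assuming the path above)

$cred = Import-Clixml -Path C:\temp\sacred.xml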

Check The Connections

I ensure a clean session by removing the dbatools and dbachecks modules and then import the local version of dbachecks and set some variables
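
Along these lines (the variable name is my assumption):

# remove any loaded copies and import the local development version
Remove-Module dbatools, dbachecks -ErrorAction SilentlyContinue
Import-Module .\dbachecks.psd1

# the four containers created above
$SQLInstances = 'localhost,15586', 'localhost,15587', 'localhost,15588', 'localhost,15589'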

Now I can start to run my Integration tests. First reset the dbachecks configuration and set some configuration values
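
Something like this (configuration names as in dbachecks at the time):

Reset-DbcConfig
Set-DbcConfig -Name app.sqlinstance -Value $SQLInstances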

Then I will run the dbachecks connectivity checks and save the results to a variable without showing any output
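
Which looks like this (the check name is my assumption for the connectivity tag):

$ConnectivityResults = Invoke-DbcCheck -SqlInstance $SQLInstances -SqlCredential $cred -Check InstanceConnection -Show None -PassThru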

I can then use Pester to check that dbachecks has worked as expected by testing whether the FailedCount property returned is 0.
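
In this shape:

Describe "Integration Tests" {
    It "The connectivity checks should all pass" {
        $ConnectivityResults.FailedCount | Should -Be 0
    }
}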

What is the Unit Test for this PR?

Next I think about what we need to be testing for this PR. The Unit tests will help us.

Choose some Integration Tests

This check is checking the Agent job history settings and the unit tests are

  • It “Passes Check Correctly with Maximum History Rows disabled (-1)”
  • It “Fails Check Correctly with Maximum History Rows disabled (-1) but configured value is 1000”
  • It “Passes Check Correctly with Maximum History Rows being 10000”
  • It “Fails Check Correctly with Maximum History Rows being less than 10000”
  • It “Passes Check Correctly with Maximum History Rows per job being 100”
  • It “Fails Check Correctly with Maximum History Rows per job being less than 100”

So we will check the same things on real SQL Servers. First, though, we need to start the SQL Server Agent, as it is not started by default. We can do this as follows
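
One way of doing it (an assumption on my part – Windows-based containers and the default Agent service name, with placeholder container names):

docker exec sql2012 powershell -Command "Start-Service -Name SQLSERVERAGENT"
docker exec sql2016 powershell -Command "Start-Service -Name SQLSERVERAGENT"
docker exec sql2017 powershell -Command "Start-Service -Name SQLSERVERAGENT"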

Unfortunately, the agent service won't start in the SQL 2014 container so I can't run agent integration tests for that container, but it's better than no integration tests.

This is What We Will Test

So we want to test if the check will pass with default settings. In general, dbachecks will pass for default instance, agent or database settings values by default.

We also want the check to fail if the configured value for dbachecks is set to default but the value has been set on the instance.

We want the check to pass if the configured value for the dbachecks configuration is set and the instance (agent, database) setting matches it.

If You Are Doing Something More Than Once ……

Let's automate that. We are going to be repeatedly running those three tests for each setting that we are running integration tests for, so I have created 3 functions for this, each checking that FailedCount or PassedCount is 0 depending on the test.
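
Each of them is of this shape (a sketch – the function name here is made up, and the real ones are in the repository):

function Invoke-DbcCheckShouldPass {
    param($Check)
    # run the named check quietly and capture the results
    $Results = Invoke-DbcCheck -SqlInstance $SQLInstances -SqlCredential $cred -Check $Check -Show None -PassThru
    It "The $Check checks should all pass" {
        $Results.FailedCount | Should -Be 0
    }
}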

Now I can use those functions inside a loop in my Integration Pester Test

Write Some Integration Tests

So for this new test I have added a value to the TestingTheChecks array then I can test my checks. The default check I can check like this

Now I need to change the configurations so that they do not match the defaults and run the checks again

Next we have to change the instance settings so that they match the dbachecks configuration and run the checks and test that they all pass.

We will (of course) use dbatools for this. First we need to find the command that we need
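
Find-DbaCommand will search the dbatools help for us – one way of doing it:

Find-DbaCommand -Pattern "job history"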

and then work out how to use it
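
which, assuming the command we found is Set-DbaAgentServer, is a job for Get-Help:

Get-Help Set-DbaAgentServer -Examples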

There is an example that does exactly what we want 🙂 So we can run this.
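
Along these lines (parameter names as in dbatools' Set-DbaAgentServer):

Set-DbaAgentServer -SqlInstance $SQLInstances -SqlCredential $cred -MaximumHistoryRows 10000 -MaximumJobHistoryRows 100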

Run the Integration Tests

And then we will check that all of the checks are passing and failing as expected

Integration Test For Error Log Counts

There is another integration test there for the error logs count. This works in the same way. Here is the code

Merge the Changes

So with all the tests passing I can merge the PR into the development branch and Azure DevOps will start a build. Ultimately, I would like to add the integration tests to the build as well, following André's blog post, but for now I used the GitHub Pull Request extension to merge the pull request into development, which started a build, and then merged that into master, which signed the code and deployed it to the PowerShell Gallery, as you can see here, and the result is

https://www.powershellgallery.com/packages/dbachecks/1.1.164