Using Azure DevOps Build Pipeline Templates with Terraform to build an AKS cluster

In the last few posts I have moved from building an Azure SQL DB with Terraform using VS Code, to automating the build process for the Azure SQL DB using Azure DevOps Build Pipelines, to using Task Groups in Azure DevOps to reuse the same build process to build an Azure Linux SQL VM and Network Security Group. This evolution is fantastic, but Task Groups can only be used in the same Azure DevOps project. It would be brilliant if I could use Configuration as Code for the Azure Build Pipeline and store that in a separate source control repository which can be used from any Azure DevOps Project.

Luckily, you can 😉 You can use Azure DevOps Job Templates to achieve this. There is a limitation at present: you can only use them for Build Pipelines and not Release Pipelines.

The aim of this little blog series was to have a single Build Pipeline stored as code which I can use to build any infrastructure that I want with Terraform in Azure, and to be able to use it anywhere.

Creating a Build Pipeline Template

I created a GitHub repository to hold my Build Templates. Feel free to use them as a base for your own, but please don't try and use the repo for your own builds.

The easiest way to create a Build Template is to already have a Build Pipeline. This cannot be done from a Task Group but I still have the Build Pipeline from my automating the build process for the Azure SQL DB using Azure DevOps Build Pipelines blog post.

There is a View YAML button. I can click this to view the YAML definition of the Build Pipeline.

I copy that and paste it into a new file in my BuildTemplates repository. (I have replaced my Azure Subscription information in the public repository)

Now I can use this yaml as configuration as code for my Build Pipeline 🙂 It can be used from any Azure DevOps project. Once you start looking at the code and the documentation for the yaml schema you can begin to write your pipelines as YAML, but sometimes it is easier to just create a build pipeline or even just a job step in the browser and click the view yaml button!

Create an AKS Cluster with a SQL 2019 container using Terraform and Build templates

I have a GitHub Repository with the Terraform code to build a simple AKS cluster. This could not have been achieved without Richard Cheney's article. I am not going to explain how it all works in this blog post, or some of the negatives of doing it this way. Instead let's build an Azure DevOps Build Pipeline to build it with Terraform using Configuration as Code (the yaml file).

I am going to create a new Azure DevOps Build Pipeline and as in the previous posts connect it to the GitHub Repository holding the Terraform code.

This time I am going to choose the Configuration as code template

I am going to give it a name and it will show me that it needs the path to the yaml file containing the build definition in the current repository.

Clicking the ellipsis will pop up a file chooser and I pick the build.yaml file.

The build.yaml file looks like this. The name is the USER/Repository Name and the endpoint is the name of the endpoint for the GitHub service connection in Azure DevOps. The template value is the name of the build yaml file @ the name given for the repository value.
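A sketch of the shape the file takes (the repository name, endpoint name and template path here are placeholders, not my real values):

resources:
  repositories:
    - repository: templates
      type: github
      name: YourGitHubUser/BuildTemplates
      endpoint: YourGitHubServiceConnection

jobs:
- template: Terraform/terraformBuild.yml@templates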

You can find (and change) your GitHub service connection name by clicking on the cog bottom left in Azure DevOps and clicking service connections

I still need to create my variables for my Terraform template (perhaps I can now just leave those in my code?). For the AKS cluster build right now I have to add presentation, location, ResourceGroupName, AgentPoolName, ServiceName, VMSize and agent_count.

Then I click save and queue and the job starts running.

If I want to edit the pipeline it looks a little different. The variables and triggers can be found under the ellipsis on the top right, and it defaults the trigger to automatic deployment.

It takes a bit longer to build, and when I get the Terraform code wrong and the build fails, I can just alter the code, commit it, push, and a new build will start; Terraform will work out what is already built and what needs to change. Eventually the job finishes successfully and the resources are built.

In Visual Studio Code with the Kubernetes extension installed, I can connect to the cluster by clicking the ellipsis and Add Existing Cluster. I choose Azure Kubernetes Services, click next, choose my subscription and add the cluster, and then I can explore it. I can also see the dashboard by right-clicking on the cluster name and choosing Open Dashboard. Right-clicking on the service name and choosing describe shows the external IP address, which I can put into Azure Data Studio to connect to my container.

So now I can source control my Build Job Steps and hold them in a central repository. I can make use of them in any project. This gives me much more control and saves me from repeating myself repeating myself. The disadvantage is that there is no handy warning when I change the underlying Build Repository that I will be affecting other Build Pipelines, and there is no easy method to see which Build Pipelines are dependent on the build yaml file.

Happy Automating


Using the same Azure DevOps build steps for Terraform with different Pipelines with Task Groups to build an Azure Linux SQL VM

In my last post I showed how to build an Azure DevOps Pipeline for a Terraform build of an Azure SQLDB. This will take the Terraform code and build the required infrastructure.

The plan all along has been to enable me to build different environments depending on the requirement. Obviously I can repeat the steps from the last post for a new repository containing the Terraform code for a different environment, but

If you are going to do something more than once Automate It

who first said this? Anyone know?

The build steps for building the Terraform are the same each time (if I keep a standard folder and naming structure), so it would be much more beneficial if I could keep them in a single place and any alterations to the process would only need to be made in one place 🙂

Task Groups

Azure DevOps has task groups. On the Microsoft Docs web-page they are described as


A task group allows you to encapsulate a sequence of tasks, already defined in a build or a release pipeline, into a single reusable task that can be added to a build or release pipeline, just like any other task.


https://docs.microsoft.com/en-us/azure/devops/pipelines/library/task-groups?view=azure-devops

If you are doing this with a more complicated existing build pipeline it is important that you read the Before You Create A Task Group section on the docs page. This will save you time when trying to understand why variables are not available (another grey hair in my beard!).

Creating A Task Group

Here's the thing: creating a task group is so easy it should be the default way you create Azure DevOps Pipelines. Let me walk you through it.

I will use the Build Pipeline from the previous post. Click edit from the build page, then CTRL and click to select all of the steps. Right-click and there's a Create Task Group button to click!

You can see that it has helpfully added the values for the parameters it requires for the location, Storage Account and the Resource Group.

Remember the grey beard hair above? We need to change those values to use the variables that we will add to the Build Pipeline, using the $(VariableName) syntax.

Once you have done that click Create

This will also alter the current Build Pipeline to use the Task Group. Now we have a Task Group that we can use in any build pipeline in this project.

Using the Task Group with a new Build Pipeline to build an Azure Linux SQL VM

Let's re-use the build steps to create an Azure SQL Linux VM. First I created a new GitHub Repository for my Terraform code. Using the docs I created the Terraform to create a resource group, a Linux SQL VM, a virtual network, a subnet, a NIC for the VM, a public IP for the VM, and a network security group with two rules, one for SQL and one for SSH. It will look like this

The next step is to choose the repository

again we are going to select Empty job (although the next post will be about Configuration as Code 🙂)

As before, we will name the Build Pipeline and the Agent Job Step and click the + to add a new task. This time we will search for the name of the Task Group that we created.

I need to add in the variables from the variable.tf in the code and also for the Task Group. When I click save and queue, it runs for less than 7 minutes, the resources appear in the Azure portal, and I can connect in Azure Data Studio.

Altering The Task Group

You can find the Task Groups under Pipelines in your Azure DevOps project

Click on the Task Group that you have created; you can then alter and edit it if required and click save.

This will warn you that any changes will affect all pipelines and task groups that are using this task group. To find out what will be affected, click on References, which will show you everything that will be affected.

Now I can run the same build steps for any Build Pipeline and alter them all in a single place using Task Groups, simplifying the administration of the Build Pipelines.

The next post will show how to use Azure DevOps templates to use the same build steps across many projects and build pipelines, and will build a simple AKS cluster.

The first post showed how to build an Azure SQLDB with Terraform using VS Code

The second post showed how to use Azure DevOps Task Groups to use the same build steps in multiple pipelines and build an Azure Linux SQL Server VM

Happy Automating!

Getting SQL Services, Starting, Stopping and Restarting them with dbatools

There was a question in the #dbatools slack channel from @g-kannan: how can you restart only the SQL services running under a certain service account?

Getting dbatools

dbatools enables you to administer SQL Server with PowerShell. To get it, simply open PowerShell and run
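Install-Module dbatools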

You can find more details on the website.

Finding the Command

To find a command you can use the dbatools command Find-DbaCommand.
For commands for services, run
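Find-DbaCommand -Pattern service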

There is a whole bundle returned.


This is how you can find any dbatools command. There is also a -Tag parameter on Find-DbaCommand.
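For example:

Find-DbaCommand -Tag Service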

This returns only the commands tagged Service.

How to use any PowerShell command

Always, always start with Get-Help.
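For the services command that is

Get-Help Get-DbaService -ShowWindow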


This will show you all the information about the command, including examples 🙂


All of these commands below require that the account running the PowerShell is a Local Admin on the host.

One Host, Many Hosts

Now, I have used just one host for all of the examples on this page. Do not be fooled: wherever I have used $ComputerName you can set it to as many hosts as you like.

You can even get those names from a database, an Excel sheet or a CMS.
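For example, a quick sketch with made-up host names:

$ComputerName = 'SQL0', 'SQL1', 'SQL2'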

Getting the Services

So, to get the services on a machine, run
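Get-DbaService -ComputerName $ComputerName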


You can output into a table format.


I will use the alias ft for this in some of the examples; that is fine for the command line, but use the full command name (Format-Table) in any code that you write for other people to use.
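Get-DbaService -ComputerName $ComputerName | Format-Table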

You have an object returned, so you can output to anything you want – CSV, JSON, a text file, email, Azure storage, a database; the world is your oyster.

Getting the Services for one instance

The Get-DbaService command has a number of parameters. There is an InstanceName parameter enabling you to get only the services for one instance. If we just want the default instance services, run
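Get-DbaService -ComputerName $ComputerName -InstanceName MSSQLSERVER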


For just the MIRROR instance services:
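Get-DbaService -ComputerName $ComputerName -InstanceName MIRROR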


Getting just the Engine or Agent services

You can also use the -Type parameter to get only services of a particular type. You can use one of the following: Agent, Browser, Engine, FullText, SSAS, SSIS, SSRS, PolyBase.

So to get only the Agent Services
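Get-DbaService -ComputerName $ComputerName -Type Agent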

You can combine the InstanceName and Type parameters to get, say, only the default instance engine service:
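Get-DbaService -ComputerName $ComputerName -InstanceName MSSQLSERVER -Type Engine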

Starting and stopping and restarting services

You can use Start-DbaService and Stop-DbaService to start and stop the services. They each have ComputerName, InstanceName and Type parameters like Get-DbaService.

So if after running
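Get-DbaService -ComputerName $ComputerName | Format-Table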

you find that all services are stopped


Start All the Services

You can run
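Start-DbaService -ComputerName $ComputerName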

and start them all


The full text service was started with the engine service which is why it gave a warning. You can see this if you have all of the services stopped and just want to start the engine services with the type parameter.
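Start-DbaService -ComputerName $ComputerName -Type Engine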


If you just want to start the Agent services, you can use
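Start-DbaService -ComputerName $ComputerName -Type Agent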


You can start just the services for one instance
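Start-DbaService -ComputerName $ComputerName -InstanceName MIRROR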


Stopping the services

Stopping the services works in the same way. Let's stop the MIRROR instance services we have just started. This will stop the services for an instance:
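Stop-DbaService -ComputerName $ComputerName -InstanceName MIRROR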


We can stop them by type as well, although this will show an extra requirement. If we start our MIRROR instance services again and then try to stop just the engine type.
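Start-DbaService -ComputerName $ComputerName -InstanceName MIRROR
Stop-DbaService -ComputerName $ComputerName -InstanceName MIRROR -Type Engine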


You will get a warning due to the dependent services

WARNING: [10:31:02][Update-ServiceStatus] (MSSQL$MIRROR on SQL0) The attempt to stop the service returned the following error: The service cannot be stopped because other services that are running are dependent on it.
WARNING: [10:31:02][Update-ServiceStatus] (MSSQL$MIRROR on SQL0) Run the command with ‘-Force’ switch to force the restart of a dependent SQL Agent

So all you have to do is use the force Luke (or whatever your name is!)
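Stop-DbaService -ComputerName $ComputerName -InstanceName MIRROR -Type Engine -Force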


You can also stop the services for an entire host, again you will need the Force parameter.
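Stop-DbaService -ComputerName $ComputerName -Force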


Restarting Services

It will come as no surprise by now to learn that Restart-DbaService follows the same pattern. It also has ComputerName, InstanceName and Type parameters, like Get-DbaService, Start-DbaService and Stop-DbaService. (Consistency is great; it's one of the things that is being worked on towards 1.0, as you can see in the Bill of Health.)

Again you will need -Force for dependent services. You can restart all of the services on a host with
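Restart-DbaService -ComputerName $ComputerName -Force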


or just the services for an instance
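Restart-DbaService -ComputerName $ComputerName -InstanceName MIRROR -Force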


or just the Agent Services
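Restart-DbaService -ComputerName $ComputerName -Type Agent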


Doing a bit of coding

Now, none of that answers @g-kannan's question: restarting only services with a certain service account.

With PowerShell you can pipe commands together so that the results of the first command are piped into the second. So we can get all of the engine services on a host for an instance with Get-DbaService and start them with Start-DbaService like this
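Get-DbaService -ComputerName $ComputerName -InstanceName MIRROR -Type Engine | Start-DbaService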


or get all of the engine services for an instance on a host and stop them
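Get-DbaService -ComputerName $ComputerName -InstanceName MIRROR -Type Engine | Stop-DbaService -Force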


or maybe you want to get all of the services that have stopped
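Get-DbaService -ComputerName $ComputerName | Where-Object State -eq 'Stopped'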


You can do the same thing with syntax that may make more sense to you if you are used to T-SQL as follows
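A sketch, with Select-Object playing the part of SELECT and Where-Object the WHERE clause:

Get-DbaService -ComputerName $ComputerName |
    Select-Object ComputerName, ServiceName, State |
    Where-Object State -eq 'Stopped'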


and then to start only those services you could do
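Get-DbaService -ComputerName $ComputerName | Where-Object State -eq 'Stopped' | Start-DbaService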


(note – you would just use Start-DbaService in this case as it won't start services that are already started!)


Come On Rob! Answer the question!

So now that you know a lot more about these commands, you can restart only the services using a particular service account by using Get-DbaService to get the services
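A sketch – the service account name here is made up, so swap in your own:

Get-DbaService -ComputerName $ComputerName | Where-Object StartName -eq 'TheBeard\SqlSvc'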


and then once you know that you have the right 'query' you can pipe that to Restart-DbaService (like making sure your SELECT query returns the correct rows for your WHERE clause before running the DELETE or UPDATE)
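Get-DbaService -ComputerName $ComputerName | Where-Object StartName -eq 'TheBeard\SqlSvc' | Restart-DbaService -Force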


Happy Automating!

Deploying To a Power Bi Report Server with PowerShell

Just a quick post to share some code that I used to solve a problem I had recently.

I needed to automate the deployment of some Power Bi reports to a Power Bi Report Server (PBRS) using TFS. I had some modified historical validation dbachecks pbix files that I wanted to deploy automatically, enabling the client to quickly and simply deploy the reports as needed.

The manual way

It is always a good idea to understand how to do a task manually before automating it. To deploy to PBRS you need to use the Power Bi Desktop optimised for Power Bi Report Server. There are instructions here. Then it is easy to deploy to the PBRS by clicking File, then Save As, and choosing Power Bi Report Server.


If I then want to set the datasource to use a different set of credentials I navigate to the folder that holds the report in PBRS and click the hamburger menu and Manage


and I can alter the User Name and Password or the type of connection by clicking on DataSources


and change it to use the reporting user for example.

Automation

But I don't want to have to do this each time and there will be multiple pbix files, so I wanted to automate the solution. The end result was a VSTS or TFS release process so that I could simply drop the pbix into a git repository, commit my changes, sync them and have the system deploy them automatically.

As with all good ideas, I started with a google and found this post by Bill Anton which gave me a good start. (I could not get the connection string change to work in my test environment, but this was not required so I didn't really examine why.)

I wrote a function that I can use via TFS or VSTS by embedding it in a PowerShell script. The function requires the ReportingServicesTools module which you can get by
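Install-Module ReportingServicesTools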

The function below is also available via the PowerShell Gallery, and the source code is on GitHub.

The code to call it uploads the report to a folder, which it will create if it does not exist. It will then upload the pbix file, overwriting the existing one if it already exists, and it uses the username and password specified.

If I want to use a Domain reporting user or a SQL Authenticated user instead, I can pass those credentials and the function changes the data source to use them.
Excellent, it all works from the command line. You can pass in a credential object as well as a username and password. The reason I enabled username and password? So that I can use TFS or VSTS and store my password as a secret variable.
Now I simply create a repository which has my pbix files and a PowerShell script, and build a quick release process to deploy them whenever there is a change 🙂
The deploy script looks like
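It is roughly this shape – a hypothetical sketch, where Publish-PbixToPBRS is a stand-in name for the embedded function and the parameter names are illustrative:

Param(
    [string]$ReportServerURI,
    [string]$ReportFolder,
    [string]$pbixFolder,
    [string]$UserName,
    [string]$Password
)
# deploy every pbix file in the folder using the function embedded in this script
Get-ChildItem $pbixFolder -Filter *.pbix | ForEach-Object {
    Publish-PbixToPBRS -Path $_.FullName -ReportServerURI $ReportServerURI -Folder $ReportFolder -UserName $UserName -Password $Password
}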
Although the function does not need to be embedded in the script and can be deployed in a module, I have included it in here to make it easier for people to use quickly.
Then create a PowerShell step in VSTS or TFS and call the script with the parameters, and the Power Bi files auto-deploy to the Power Bi Report Server. I have my process complete 🙂

Happy Automating 🙂

Using the PowerShell AST to find a ForEach Method

In dbachecks we enable people to see what checks are available by running Get-DbcCheck. This gives a number of properties, including the 'type' of check. This refers to the configuration item or parameter that is required to have a value for this check to run.

For example – any check to do with the SQL Agent is of type SqlInstance because it requires an instance to be specified, but a check for SPNs is of type ComputerName because it requires a computer name to run.

Automation for the win

Because I believe in automation, I do not want to hard-code these values anywhere, but create them when the module is imported. We use a json file to feed Get-DbcCheck and populate the json file when we import the module. This is done using the method that I described here, and it means that whenever a new check is added it is automatically available in Get-DbcCheck without any extra work.

We use code like this
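In outline it is something like this (a sketch of the approach rather than the exact module code; $Check holds the text of the check file):

$CheckFileAST = [Management.Automation.Language.Parser]::ParseInput($Check, [ref]$null, [ref]$null)
$Describes = $CheckFileAST.FindAll([Func[Management.Automation.Language.Ast, bool]] {
        param($ast) $ast.CommandElements -and $ast.CommandElements[0].Value -eq 'Describe'
    }, $true)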
First we parse the code with the AST and store that in the CheckFileAST variable, then we use the FindAll method to find any command elements that match "Describe", which conveniently gets our Describes. Then we can simply match the Parent object, which holds some code, to each function that we use to get our values to be passed to the tests – Get-ComputerName, Get-Instance, Get-ClusterObject – and set the type appropriately.
When run against a check, it will find the Describe block and get the title "Backup Path Access" and the tags BackupPathAccess, Storage, DISA, $filename, and then find the Get-Instance and set the type to SqlInstance.

Until Rob breaks it!

This has worked wonderfully well for the 6 months or so of the life of dbachecks, but this week I broke it!
The problem was the performance of the code. It was taking a long time to run the tests and I am looking at ways to improve this. I was looking at the Server.Tests file because I thought, why not start with one of the smaller files?
It runs the following checks:

  • Server Power Plan Configuration
  • SPNs
  • Disk Space
  • Ping Computer
  • CPUPrioritisation
  • Disk Allocation Unit
  • Instance Connection
and it was looping through the computer names for each check like this
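Each check had its own loop, roughly this shape (a sketch, not the actual check code):

Describe "Ping Computer" -Tags PingComputer, $filename {
    (Get-ComputerName).ForEach{
        It "Computer $psitem responds to ping" {
            (Test-Connection $psitem -Count 1 -Quiet) | Should Be $true
        }
    }
}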
I altered it to have only one loop for the computer names like so
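Something like this sketch, with every Describe inside a single loop:

(Get-ComputerName).ForEach{
    $computer = $psitem
    Describe "Ping Computer" -Tags PingComputer, $filename {
        It "Computer $computer responds to ping" {
            (Test-Connection $computer -Count 1 -Quiet) | Should Be $true
        }
    }
    # the rest of the Server checks follow here inside the same loop
}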
and immediately in testing, my checks for the Server tag decreased in time by about 60% 🙂
I was very happy.
Then I added it to the dbachecks module on my machine, loaded the module and realised that my json file for Get-DbcCheck was no longer being populated for the type, because the line that matched the Describe's parent code was no longer true.

AST for other things

So I googled Management.Automation.Language.Ast and the first result led me to docs.microsoft.com. There are a number of different language elements available there and I found InvokeMemberExpressionAst, which will let me find any methods that have been invoked, so now I can find the loops with
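$InvokedMethods = $CheckFileAST.FindAll([Func[Management.Automation.Language.Ast, bool]] {
        param($ast) $ast -is [Management.Automation.Language.InvokeMemberExpressionAst]
    }, $true)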
When I examined the object returned I could see that I could further limit the result to get only the method for Get-ComputerName and then if I choose the Extent I can get the code of that loop
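Something like this sketch:

$InvokedMethods | Where-Object { $_.Expression.Extent.Text -eq '(Get-ComputerName)' } |
    Select-Object -ExpandProperty Extent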
and now Get-DbcCheck is returning the right results and the checks are a little faster
You can find dbachecks on the PowerShell Gallery or install it using
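Install-Module dbachecks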


Write Your first Pester Test Today

I was in Glasgow this Friday enjoying the fantastic hospitality of the Glasgow SQL User Group @SQLGlasgow and presenting sessions with Andre Kamman, William Durkin and Chrissy LeMaire.

I presented "Green is Good Red is Bad – Turning your checklists into Pester Tests". I had to make sure I had enough energy beforehand so I treated myself to a fabulous burger.


Afterwards I was talking to some of the attendees and realised that maybe I could show how easy it was to start writing your first Pester test. Here are the steps to follow so that you can write your first Pester test:

  • Decide the information you wish to test
  • Understand how to get it with PowerShell
  • Understand what makes it pass and what makes it fail
  • Write a Pester Test

The first bit is up to you. I cannot decide what you need to test for on your servers in your environments. Whatever is the most important. For now pick one thing.

Logins – let's pick logins as an example for this post. "It is good practice to disable the sa account" is advice that you will read all over the internet, and it is often written into estate documentation, so let's write a test for that.

Now we need the PowerShell command to return the information to test for. We need a command that will get information about logins on a SQL server and if it can return disabled logins then all the better.

As always when starting to use PowerShell with SQL Server, I would start with dbatools. If we run Find-DbaCommand we can search for commands in the module that support logins. (If you have chosen something non-SQL Server related then you can use Get-Command or the internet to find the command you need.)
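Find-DbaCommand -Pattern Login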


Get-DbaLogin. That looks like the one that we want. Now we need to understand how to use it. Always, always use Get-Help to do this. If we run
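Get-Help Get-DbaLogin -Detailed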

we get all of the information about the command and the examples. Example 8 looks like it will help us


So now try running the command for our disabled sa account
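Get-DbaLogin -SqlInstance sql0 -Login sa -Disabled   # sql0 is a placeholder - use your own instance name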


So we know that if we have a disabled sa account we get a result. Let's enable the sa account and run the command again.


We don't get a result. Excellent – now we know that for a successful test we get one result and for a failed test we get zero results. We can check that by running
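(Get-DbaLogin -SqlInstance sql0 -Login sa -Disabled).Count   # sa disabled - returns 1
(Get-DbaLogin -SqlInstance sql1 -Login sa -Disabled).Count   # sa enabled - returns 0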


The first one has the account disabled and the second one does not. So now we can write our Pester test. We can start with a Describe block with a useful title. I am going to add a Context block so that you can see how you can group your tests.
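Describe "Testing the sa login" {
    Context "Checking that sa is disabled" {
    }
}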


and then we will write our test. Pester tests use the It keyword. You should always give a useful title to your test.
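It "The sa login should be disabled" {
}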


Now we can write our test. We know that the command returns one result when we want it to pass so we can write a test like this
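Describe "Testing the sa login" {
    Context "Checking that sa is disabled" {
        It "The sa login should be disabled" {
            (Get-DbaLogin -SqlInstance sql0 -Login sa -Disabled).Count | Should Be 1
        }
    }
}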


The code I have added is
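(Get-DbaLogin -SqlInstance sql0 -Login sa -Disabled).Count | Should Be 1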

which is
  • the code for getting the information about the thing we wanted to test (The count of the disabled sa logins on the instance)
  • a pipe symbol |
  • The Should key word
  • The Be keyword
  • and the result we want to pass the test (1)

Ta Da! One Pester test written. You can run the test just by highlighting the code and running it in VS Code (or PowerShell ISE) and it will look like this for a passing test


It is better to save it for later use and then call it with Invoke-Pester
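Invoke-Pester C:\PesterTests\sa.Tests.ps1   # the path is wherever you saved the test file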


So now you can write your first Pester test. Just find the PowerShell to get the information that you need, understand what the results will be for passing and failing tests, and write your test 🙂

Getting the Latest Version of the Module

The magnificent Steve Jones wrote about getting the latest version of Pester and the correct way to do it. You can find the important information here

Spend a Whole Day With Chrissy & I at SQLBits

If you would like to spend a whole day with Chrissy LeMaire and me at SQLBits in London in February, we have a pre-con on the Thursday. You can find out more about the pre-con at sqlps.io/bitsprecon and you can register at sqlps.io/bitsreg.

Using Plaster To Create a New PowerShell Module

Chrissy, CK and I presented a pre-con at PASS Summit in Seattle last week.


Tracey Boggiano (T | B) came along to our pre-con and afterwards we were talking about creating PowerShell modules. In her blog post she explains how she creates modules by copying the code from another module (dbatools in this case!) and altering it to fit her needs. This is an absolutely perfect way to do things; in our pre-con we mentioned that there is no use in re-inventing the wheel, so if someone else has already written the code then make use of it.

I suggested however that she use the PowerShell module Plaster to do this. We didn't have enough time to really talk about Plaster, so Tracey, this is for you (and I am looking forward to your blog about using it too 😉).

What is Plaster?

Plaster is a template-based file and project generator written in PowerShell. Its purpose is to streamline the creation of PowerShell module projects, Pester tests, DSC configurations, and more. File generation is performed using crafted templates which allow the user to fill in details and choose from options to get their desired output.

How Do I Get Plaster?

The best way to get Plaster is also the best way to get any PowerShell module: from the PowerShell Gallery.

You can just run
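Install-Module -Name Plaster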

If you get a prompt about the repository not being trusted, don’t worry you can say yes.

Following PowerShell’s Security Guiding Principles, Microsoft doesn’t trust its own repository by default. The advice as always is never trust anything from the internet even if a bearded fellow from the UK recommends it!!

The PowerShell Gallery is a centralised repository where anyone can upload code to share and whilst all uploads are analyzed for viruses and malicious code by Microsoft, user discretion is always advised. If you do not want to be prompted every time that you install a module then you can run
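Set-PSRepository -Name PSGallery -InstallationPolicy Trusted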

if you and/or your organisation think that that is the correct way forward.

What Can We Do With Plaster?

Now that we have installed the module we can get to the nitty gritty. You can (and should) use Plaster to automate the creation of your module structure. If you are going to do something more than once then automate it!

I created a repository for my Plaster template. You are welcome to take it and modify it for your own needs. I created a folder structure and some default files that I always want to have in my module folder


So in my template I have created all of the folders to organise the files in the way that I want to for my modules. I have also included the license file and some markdown documents for readme, contributing and installation. If we look in the tests folder


There are some default test files included as well.

But Plaster is more than just a file and folder template repository. If we look in the installation markdown file, it looks like this
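(a sketch of the sort of content it holds; the <%= %> tokens are Plaster's parameter syntax)

# Installing <%=$PLASTER_PARAM_ModuleName%>

Install-Module -Name <%=$PLASTER_PARAM_ModuleName%>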

We can parameterise the content of our files. This will create a very simple markdown file showing how to find and install the module from the PowerShell Gallery, which saves us from having to type the same thing again and again. Let's see how to do that.

The Manifest XML file

The magic happens in the manifest file. You can create one with the New-PlasterManifest command in the template directory. Thank you to Mustafa for notifying me that the manifest file creation now requires an extra parameter of TemplateType.
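Something like this (the template name and version are just examples):

$manifestProperties = @{
    Path            = '.\PlasterManifest.xml'
    TemplateName    = 'BeardModuleTemplate'
    TemplateVersion = '0.0.1'
    TemplateType    = 'Project'
}
New-PlasterManifest @manifestProperties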

This will create a PlasterManifest.xml file that looks like this
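Roughly this shape (a sketch from memory, so treat the attribute details as approximate):

<?xml version="1.0" encoding="utf-8"?>
<plasterManifest schemaVersion="1.1" templateType="Project" xmlns="http://www.microsoft.com/schemas/PowerShell/Plaster/v1">
  <metadata>
    <name>BeardModuleTemplate</name>
    <id><!-- a GUID is generated for you --></id>
    <version>0.0.1</version>
  </metadata>
  <parameters></parameters>
  <content></content>
</plasterManifest>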
You can see that the parameters and content tags are empty. This is where we will define the parameters which will replace the tokens in our files and the details for how to create our module folder.

Plaster Parameters

At present my parameters tag looks like this
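Something along these lines (a sketch – the parameter names here are examples):

<parameters>
  <parameter name="ModuleName" type="text" prompt="Enter the name of the module" />
  <parameter name="ModuleDesc" type="text" prompt="Enter a description for the module" />
  <parameter name="ModuleVersion" type="text" prompt="Enter the version number" default="0.0.1" />
  <parameter name="FullName" type="user-fullname" prompt="Enter your name" />
</parameters>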
So we can set up various parameters with their names and data types defined, a prompt and, if we want, a default value.
We can then use
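<%=$PLASTER_PARAM_ModuleName%>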
in our files to make use of the parameters.

Plaster Content

The other part of the manifest file to create is the content. This tells Plaster what to do when it runs.

Mine is split into 3 parts, shown together in the sketch below.

We can provide messages to the user with the message tag. I create the folders using the filesource tag
This part creates all of the required files. You can see that the static files (those which do not require any sort of parameterisation of their contents) use the same file source tag as the folders, with the source defined. The files that have parameterised content use a templateFile source tag, telling Plaster to look inside them for the tokens to be replaced.
The last part of the content creates the module manifest, which I have filled in with the parameters for each of the values.
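Putting the three parts together, a sketch of the content section (tag names are from the Plaster documentation; the values are examples):

<content>
  <message>Creating the folder structure and static files</message>
  <file source='' destination='functions' />
  <file source='' destination='tests' />
  <file source='LICENSE' destination='LICENSE' />
  <templateFile source='installation.md' destination='installation.md' />
  <message>Creating the module manifest</message>
  <newModuleManifest destination='${PLASTER_PARAM_ModuleName}.psd1' moduleVersion='$PLASTER_PARAM_ModuleVersion' rootModule='${PLASTER_PARAM_ModuleName}.psm1' author='$PLASTER_PARAM_FullName' description='$PLASTER_PARAM_ModuleDesc' />
</content>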

Creating a new module at the command line

Now you can easily create a module with all of the required folders and files that you want by creating a directory and running
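Invoke-Plaster -TemplatePath GitHub\PlasterTemplate -DestinationPath .   # TemplatePath is wherever the template repository is cloned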

Plaster prompts for each of the parameters and then creates the folders and files.

It's that easy 🙂

Create a module without prompts

You can also create a module without needing to answer prompts. We can prefill them in our parameter splat
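For example (the parameter names match the ones defined in the manifest, so yours may differ):

$plaster = @{
    TemplatePath    = 'GitHub\PlasterTemplate'   # wherever the template lives
    DestinationPath = '.'
    ModuleName      = 'MyNewModule'
    ModuleDesc      = 'A module built from my Plaster template'
    ModuleVersion   = '0.0.1'
}
Invoke-Plaster @plaster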
and the module will be created without a single prompt.

Make Your Own

Hopefully this has given you enough information and shown you how easy it is to automate creating the framework for your new PowerShell modules and to parameterise them. Let me know how you get on and share your examples.

Further Reading

Kevin Marquette's blog post is an excellent and detailed post on using Plaster which you should also read for reference, as well as David Christian's post which has some great content on adding user choice to the parameters, enabling one Plaster template to fulfil multiple requirements.

TSQL2sDay – Get-PostRoundup

First, an apology: this round up is late!

The reason for that is an error in the PowerShell testing module Pester (That’s not completely true as you shall see!!)

I spoke in Stuttgart at the PowerShell Saturday last weekend and had intended to write this blog post whilst travelling. Unfortunately, I found a major error in Pester (again, not strictly true, but it makes a good story!!)

I explained it with this slide in my presentation

Yep, I forgot to pack my NUC with my VMs on it and had to re-write all my demos!!

But anyway, on to the TSQL2sDay posts

What a response. You wonderful people. I salute you with a Rimmer salute

There are 34 TSQL2sDay posts about dbatools, about starting with PowerShell, whether you should learn PowerShell, SSAS, SSRS, Log Shipping, backups, restores, Pester, Default settings, best practices, migrations, Warnings in Agent Jobs, sqlpackage, VLFs, CMS, Disabling Named Pipes, Orphaned users, AG Status, AG Agent Jobs, logging, classes, auditing, copying files, ETL and more.

I am really pleased to see so many first timers in the TSQL2sDay monthly blog party. Please don't let this be your only TSQL2sDay post. Come back next month and write a post on that topic.

Here they are below in the medium of tweets, so that you can also go and follow these wonderful people who are so willing to share their knowledge. Say thank you to them, ask them questions, interact.

Learn, Share, Network

Volker wrote about testing best practices with dbatools

Dave explains why PowerShell is so useful to him in his ETL processes

Steve writes about the time he has saved using PowerShell to automate restores and audit SQL Server instances

Nate talks about copying large files like SQL Server backups using BITS with PowerShell

Warren talks about his experience as a beginner, the amount of things he automates and his DBReboot module

THANK YOU every single one and apologies if I have missed anyone!


Testing the Identity Column usage in SQL Server with PowerShell and dbatools

SQL Server uses identity columns to auto-generate values, normally keys. When you create an identity column it has a data type, and that data type has a maximum number of values.

  • bigint 9,223,372,036,854,775,807
  • int 2,147,483,647
  • smallint 32,767
  • tinyint 255

What happens when you try to insert a value in an identity column that is greater than the maximum value? You get an error and a failed transaction. Let's do that.

Using AdventureWorks, I know (I'll show how in a minute) that the ShiftID column of the HumanResources.Shift table is a tinyint, so the highest value for the ShiftID column is 255.

If we run

USE AdventureWorks2014;
GO
INSERT INTO [HumanResources].[Shift]
([Name]
,[StartTime]
,[EndTime]
,[ModifiedDate])
VALUES
( 'Made Up SHift ' + CAST(NEWID() AS nvarchar(MAX))
,DATEADD(hour,-4, GetDate())
,'07:00:00.0000000'
,GetDate())
WAITFOR DELAY '00:00:00.050';
GO 252
Adding a number after GO says run this that many times, so we have added 252 rows to the existing 3 rows.

So what happens if we try to add another row?

USE AdventureWorks2014;
GO
INSERT INTO [HumanResources].[Shift]
([Name]
,[StartTime]
,[EndTime]
,[ModifiedDate])
VALUES
( 'Made Up SHift ' + CAST(NEWID() AS nvarchar(MAX))
,DATEADD(hour,-4, GetDate())
,'07:00:00.0000000'
,GetDate())
GO
We get an error
Msg 8115, Level 16, State 1, Line 4
Arithmetic overflow error converting IDENTITY to data type tinyint.
Arithmetic overflow occurred.
If that is a table that is important to your system, a logging table, or worse, an order table, then there are quickly going to be phone calls, visits to your desk, arm waving etc. until you get it resolved. Let's clean up our mess.
USE AdventureWorks2014
GO
DELETE FROM HumanResources.Shift
WHERE ShiftId > 3
GO
DBCC CHECKIDENT ('HumanResources.Shift', RESEED, 3)
GO
It would be very useful to be able to quickly see what the current values of the identity columns are and how close they are to being full, so that we can plan for and take action before we end up with shouty smart suits at our desks. If we could do it with just one line of code, that would be even easier.
Step forward dbatools. This PowerShell module is a community based project written by excellent, brilliant people in their own time and available to you for free. To find out more and how to use and install it, visit https://dbatools.io
There is a command called Test-DbaIdentityUsage. This command was created by Brandon Abshire. You can find Brandon blogging at netnerds.net. Thank you, Brandon.
As always with a new PowerShell command you should always start with Get-Help
Get-Help Test-DbaIdentityUsage -ShowWindow

The command has a few parameters

  • SqlInstance – One or many Instances
  • SqlCredential – for SQL Authentication
  • Databases – to filter for databases ( This is a dynamic parameter and doesn’t show in the Help)
  • Threshold – define a minimum percentage for how full the identity column is
  • NoSystemDB – to ignore the system databases

So we can run the command against one instance

Test-DbaIdentityUsage -SqlInstance sql2014ser12r2


This returns an object for each identity column in each database on the instance. The object has the following properties

ComputerName   : SQL2014SER12R2
InstanceName   : MSSQLSERVER
SqlInstance    : SQL2014SER12R2
Database       : AdventureWorks2014
Schema         : HumanResources
Table          : Shift
Column         : ShiftID
SeedValue      : 1
IncrementValue : 1
LastValue      : 3
MaxNumberRows  : 254
NumberOfUses   : 3
PercentUsed    : 1.18

We can use the objects returned from this command in a number of ways, this is one of the beauties of PowerShell that we can interact with numerous systems. I have blogged about some simple ways of doing this here but your only limit is your imagination.

I love to use Out-GridView as it enables quick and easy sorting of the returned data
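Test-DbaIdentityUsage -SqlInstance sql2014ser12r2 | Out-GridView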


The databases parameter is dynamic, so it will prefill the names of the databases on the instance. This works in both VS Code and the ISE.

We can use the threshold parameter to only show results for the identity columns whose value is above a percentage of the max value for the column. Let's fill the ShiftId column to above 90% and show this.

USE AdventureWorks2014;
GO
INSERT INTO [HumanResources].[Shift]
([Name]
 ,[StartTime]
,[EndTime]
,[ModifiedDate])
VALUES
( 'Made Up SHift ' + CAST(NEWID() AS nvarchar(MAX))
,DATEADD(hour,-4, GetDate())
,'07:00:00.0000000'
,GetDAte())
WAITFOR DELAY '00:00:00.050';
GO 230

and now run

Test-DbaIdentityUsage -SqlInstance sql2014ser12r2  -Threshold 90


Don't forget to use the cleanup script. We can pass a whole array of SQL instances to this command and check multiple servers at the same time. In this example, I am querying my Hyper-V server for all running VMs with SQL in the name, except for my broken SQL2008 box. Just to get some results I will set the threshold to 1.

$SQLServers = (Get-VM -ComputerName beardnuc | Where-Object {$_.Name -like '*SQL*' -and $_.Name -ne 'SQL2008Ser2008' -and $_.State -eq 'Running'}).Name
Test-DbaIdentityUsage -SqlInstance $SQLServers -Threshold 1 | Out-GridView
As you can see this function does not support SQL instances lower than SQL 2008 and you will get warnings for availability group databases
It's quick too, finishing in less than 2 seconds in my lab of 10 SQL Servers and 125 databases. The -WarningAction SilentlyContinue suppresses the yellow warnings.
This is ideal for using Pester to test.
 Describe "Testing how full the Identity columns are" {
$SQLServers = (Get-VM -ComputerName beardnuc | Where-Object {$_.Name -like '*SQL*' -and $_.Name -ne 'SQL2008Ser2008' -and $_.State -eq 'Running'}).Name
$testCases= @()
$SQLServers.ForEach{$testCases += @{Name = $_}}
It "<Name> databases all have identity columns less than 90% full" -TestCases $testCases {
Param($Name)
(Test-DbaIdentityUsage -SqlInstance $Name -Threshold 90 -WarningAction SilentlyContinue).PercentUsed | Should Be
}
}
An excellent quick test but it doesn’t show us which databases have failed. We can iterate through our servers and databases like this
Describe "Testing how full the Identity columns are" {
    $SQLServers = (Get-VM -ComputerName beardnuc | Where-Object {$_.Name -like '*SQL*' -and $_.Name -ne 'SQL2008Ser2008' -and $_.State -eq 'Running'}).Name
    foreach($SQLServer in $SQLServers)
    {
        Context "Testing $SQLServer" {
            $dbs = (Connect-DbaSqlServer -SqlServer $SQLServer).Databases.Name
            foreach($db in $dbs)
            {
                It "$db on $SQLServer identity columns are less than 90% full" {
                    # -Threshold 90 only returns columns at 90% or more, so a passing database returns nothing
                    (Test-DbaIdentityUsage -SqlInstance $SQLServer -Databases $db -Threshold 90 -WarningAction SilentlyContinue).PercentUsed | Should Be $null
                }
            }
        }
    }
}
This is using the Connect-DbaSqlServer to create a SMO object and then gathering the databases on the server into a variable and iterating through them
It looks like this when it is running
and at the end gives you a little overview of the number of tests that have failed
In a previous post I showed how you can output these results to XML or even make an HTML page showing the output.
But perhaps that isn’t granular enough for you and you want a test for each column. This is how you could do that
$SQLServers = (Get-VM -ComputerName beardnuc | Where-Object {$_.Name -like '*SQL*' -and $_.Name -ne 'SQL2008Ser2008' -and $_.State -eq 'Running'}).Name
foreach($SQLServer in $SQLServers)
{
    Describe "$SQLServer - Testing how full the Identity columns are" {
            $dbs = (Connect-DbaSqlServer -SqlServer $SQLServer).Databases.Name
            foreach($db in $dbs)
            {
                Context "Testing $db" {
                $Tests = Test-DbaIdentityUsage -SqlInstance $SQLServer -Databases $db -WarningAction SilentlyContinue
                foreach($test in $tests)
                {
                    It "$($test.Column) identity column in $($Test.Table) is less than 90% full" {
                        $Test.PercentUsed | Should BeLessThan 90
                    }
                }
            }
        }
    }
}
Which looks like this: a test for each identity column in each database on each server in your environment.

The other question that we have to answer these days is: does it work with SQL Server on Linux? We will have to pass a SQL authentication credential, and this time I will use Format-Table for the output.

 Test-DbaIdentityUsage -SqlInstance LinuxvNextCTP14 -SqlCredential (Get-Credential) | Format-Table


Happy Automating!

NOTE – The major 1.0 release of dbatools due in the summer of 2017 may have breaking changes which will stop the above code from working. There are also new commands coming which may replace this command. This blog post was written using dbatools version 0.8.942. You can check your version using

 Get-Module dbatools

and update it using an Administrator PowerShell session with

 Update-Module dbatools

You may find that you get no output from Update-Module as you have the latest version. If you have not installed the module from the PowerShell Gallery using

Install-Module dbatools

Then you can use

Update-dbatools


Using Pester with Get-DbaLastGoodCheckDb from dbatools

In my last post I showed Get-DbaLastGoodCheckDb from dbatools. This module is a community based project written by excellent, brilliant people in their own time and available to you for free. To find out more and how to use and install it, visit https://dbatools.io

In a similar fashion to my post about using Pester with Test-DBALastBackup I thought I would write some Pester tests for Get-DbaLastGoodCheckDb as well

Pester provides a framework for running unit tests to execute and validate PowerShell commands from within PowerShell. Pester consists of a simple set of functions that expose a testing domain-specific language (DSL) for isolating, running, evaluating and reporting the results of PowerShell commands.

First we will use Test Cases again to quickly test a number of instances and see if any servers have a database which does not have a successful DBCC CheckDb. We will need to use the -Detailed parameter of Get-DbaLastGoodCheckDb so that we can access the status property. I have filled the $SQLServers variable with the names of the SQL Servers in my lab that are running and are not my broken SQL2008 box.

The status property will contain one of three statements:

  • Ok (this means that a successful test was run in the last 7 days)
  • New database, not checked yet
  • CheckDb should be performed

We want to make sure that none of the results from the command have the second two statements. We can do this by adding two checks in the test; if either fails then the test will fail.

 Describe "Testing Last Known Good DBCC" {
$SQLServers = (Get-VM -ComputerName beardnuc | Where-Object {$_.Name -like '*SQL*' -and $_.Name -ne 'SQL2008Ser2008' -and $_.State -eq 'Running'}).Name
$testCases= @()
$SQLServers.ForEach{$testCases += @{Name = $_}}
It "<Name> databases have all had a successful CheckDB within the last 7 days" -TestCases $testCases {
Param($Name)
$DBCC = Get-DbaLastGoodCheckDb -SqlServer $Name -Detailed
$DBCC.Status -contains 'New database, not checked yet'| Should Be $false
$DBCC.Status -contains 'CheckDb should be performed'| Should Be $false
}
}

We can save this as a .ps1 file (or we can add it to an existing Pester test file) and call it with Invoke-Pester, or just run it in PowerShell.


As you can see, you will still get the same warning for the availability group databases, and we can see that SQL2012Ser08AG1 has a database whose status is 'CheckDb should be performed' and SQL2012Ser08AGN2 has a database with a status of 'New database, not checked yet'.

That's good, but what if we run our DBCC CheckDbs at a different frequency and want to test that? We can also test if the databases have had a successful DBCC CheckDb using the LastGoodCheckDb property, which will not contain a null if there was a successful DBCC CheckDb. As Pester is PowerShell we can use

($DBCC.LastGoodCheckDb -contains $null)

and we can use Measure-Object to get the maximum value of the DaysSinceLastGoodCheckdb property like this

($DBCC | Measure-Object -Property  DaysSinceLastGoodCheckdb -Maximum).Maximum
If we put those together and want to test for a successful DBCC CheckDb in the last 3 days, we have a test that looks like
Describe "Testing Last Known Good DBCC" {
$SQLServers = (Get-VM -ComputerName beardnuc | Where-Object {$_.Name -like '*SQL*' -and $_.Name -ne 'SQL2008Ser2008' -and $_.State -eq 'Running'}).Name
$testCases= @()
$SQLServers.ForEach{$testCases += @{Name = $_}}
It "<Name> databases have all had a successful CheckDB" -TestCases $testCases {
Param($Name)
$DBCC = Get-DbaLastGoodCheckDb -SqlServer $Name -Detailed
($DBCC.LastGoodCheckDb -contains $null) | Should Be $false
}
It "<Name> databases have all had a CheckDB run in the last 3 days" -TestCases $testCases {
Param($Name)
$DBCC = Get-DbaLastGoodCheckDb -SqlServer $Name -Detailed
($DBCC | Measure-Object -Property  DaysSinceLastGoodCheckdb -Maximum).Maximum | Should BeLessThan 3
}
}
and when we call it with Invoke-Pester it looks like
That's good, but it is only at an instance level. If we want our Pester test to show results per database we can do that like this
Describe "Testing Last Known Good DBCC" {
$SQLServers = (Get-VM -ComputerName beardnuc | Where-Object {$_.Name -like '*SQL*' -and $_.Name -ne 'SQL2008Ser2008' -and $_.State -eq 'Running'}).Name
foreach($Server in $SQLServers)
{
$DBCCTests = Get-DbaLastGoodCheckDb -SqlServer $Server -Detailed
foreach($DBCCTest in $DBCCTests)
{
It "$($DBCCTest.Server) database $($DBCCTest.Database) had a successful CheckDB"{
$DBCCTest.Status | Should Be 'Ok'
}
It "$($DBCCTest.Server) database $($DBCCTest.Database) had a CheckDB run in the last 3 days" {
$DBCCTest.DaysSinceLastGoodCheckdb | Should BeLessThan 3
}
It "$($DBCCTest.Server) database $($DBCCTest.Database) has Data Purity Enabled" {
$DBCCTest.DataPurityEnabled| Should Be $true
}
}
}
}
We gather the SQL instances into an array in the same way, and this time we loop through each one, put the results of Get-DbaLastGoodCheckDb for that instance into a variable, and then iterate through each result and check that the status is Ok, the DaysSinceLastGoodCheckdb is less than 3, and DataPurityEnabled is true, and we have


You can look at my previous posts on using Pester to see examples of creating XML files or HTML reports from the results of the tests.

Hopefully, as you have read this you have also thought of other ways that you can use Pester to validate the state of your environment. I would love to know how and what you do.

Happy Automating

NOTE – The major 1.0 release of dbatools due in the summer of 2017 may have breaking changes which will stop the above code from working. There are also new commands coming which may replace this command. This blog post was written using dbatools version 0.8.942. You can check your version using

 Get-Module dbatools

and update it using an Administrator PowerShell session with

 Update-Module dbatools

You may find that you get no output from Update-Module as you have the latest version. If you have not installed the module from the PowerShell Gallery using

Install-Module dbatools

Then you can use

Update-dbatools