TSQL2sDay – Get-PostRoundup

First an apology, this round up is late!

The reason for that is an error in the PowerShell testing module Pester (That’s not completely true as you shall see!!)

I spoke in Stuttgart at the PowerShell Saturday last weekend and had intended to write this blog post whilst travelling, unfortunately I found a major error in Pester (again not strictly true but it makes a good story!!)

I explained it with this slide in my presentation

Yep, I forgot to pack my NUC with my VMs on it and had to re-write all my demos!!

But anyway, on to the TSQL2sDay posts

What a response. You wonderful people. I salute you with a Rimmer salute

There are 34 TSQL2sDay posts about dbatools, about starting with PowerShell, If you should learn PowerShell, SSAS, SSRS, Log Shipping, backups, restores, Pester, Default settings, best practices, migrations, Warnings in Agent Jobs, sqlpackage, VLFs, CMS, Disabling Named Pipes, Orphaned users, AG Status, AG Agent Jobs, logging, classes, auditing, copying files, ETL and more.

I am really pleased to see so many first timers to the TSQL2sDay monthly blog party. Please don't let this be your only TSQL2sDay post. Come back next month and write a post on that topic.

Here they are below in the medium of tweets, so that you can also go and follow these wonderful people who are so willing to share their knowledge. Say thank you to them, ask them questions, interact.

Learn, Share, Network

Volker wrote about testing best practices with dbatools

Dave explains why PowerShell is so useful to him in his ETL processes

Steve writes about the time he has saved using PowerShell to automate restores and audit SQL Server instances

Nate talks about copying large files like SQL Server backups using BITS with PowerShell

Warren talks about his experience as a beginner, the amount of things he automates and his DBReboot module

THANK YOU every single one and apologies if I have missed anyone!

 

 

Presentation Nerves

My previous post on interviews and a number of conversations this year inspired me to write this post. I am lucky enough to have been selected to speak at numerous events over the past few years and I am really lucky because I thoroughly enjoy doing them. The feedback I receive from those sessions has been wonderful and it seems that in general most people really enjoy them.

This leads to some misconceptions though. Recently people have said to me "Oh I am not like you, I get far too nervous to do a session" and also "I am so glad that you get just as nervous as me before presenting, I thought it was just me" even though I have blogged about this before. I think it is important for newer speakers as well as more established ones to know that more presenters than you realise get very nervous before they speak.

Many don’t publicise this (which is fine) but I will. I get nervous before I speak. I know that it doesn’t show when I start my presentation but it is there. My stomach does back flips, my hands shake, I forget to bring things to the room. I worry that I will make a catastrophic mistake or that I’ll open my mouth and nothing will come out.

It's ok. It doesn't last very long, and it's gone the moment I start speaking. Other speakers need a few moments into their session before those nerves really fade, but they do go.

Whenever I am involved in a conversation about nerves and presentations on twitter I respond in the same way

I love this quote by Joan Jett (young people link) To me it means that you should be nervous before speaking because that energy will ensure that you give a good presentation. If you get up to do a presentation and you are blasé or complacent about it this will be obvious to your audience and not in a good way.

So what to do?

Practice

You can’t just approach a presentation knowing that you will be nervous and expect it to be ok. You need to have a background of confidence that your presentation will turn out ok.

You need to practice.

You need to practice your presentation.

You need to practice your presentation out loud.

You need to practice your presentation out loud more than once.

You have to get used to hearing your own voice when presenting. It can be off-putting hearing yourself blathering on and you don’t want that to surprise you or interrupt your flow. This will also help with projecting away from your screen and into the room if you practice correctly. Imagine all the people in the room and try to speak in their direction with your head up and not pointing down at the screen.

You also need to practice your timings, so that you know that your session will fit in the allocated time. Make notes of your timings at certain points in your presentation so that when you are presenting your session you can be aware of whether you are still on your expected time. Some people will speak faster in their actual session than the practice and some slower. As you practice and learn you will understand your own rhythm and cadence and be able to alter it if required. This will help you to build that confidence that your presentation will be ok.

More Practice

You need to practice.

You need to practice your demos.

You need to practice your demos more than once.

Being able to reset your demos and run them through will teach you more skills. Using Pester to make sure your environment is in place correctly will help.

Run your demos with your machine set up as it will be for the presentation. If you need to have PowerPoint, SSMS, Visual Studio, Visual Studio Code and three SQL instances running then practice with them all running. You should do this so that your timings when running your demos are the same as when you actually present your session. This is even more important if you are doing a webinar as that software will require some of your machine's resources which may slow your demo down.

Knowing that your demos are consistently repeatable and how long they will take will also help to give you the confidence that your presentation will be ok.

Deal with them

If we accept that you will have nerves and that’s not a bad thing you have to be able to deal with them, to use them to make your presentation rock.

This is a distinctly personal thing and I have no idea what will work for you. You will have to try some things and see if they work or not. Recently I found a new way for myself

Normally I like to be in the room I will be presenting in before I do my session as this gives me something that I can listen to, I can see and feel the layout of the room and also usually prepare my laptop with the correct programmes and run Pester to make sure all is as it should be for my demos. In Portugal I was chatting with someone and missed the start of the session and because of the room layout I did not want to disturb the presenter before me. Slava Oks was giving a presentation which I started to watch and it was so mind-melting I completely forgot that I was presenting in the next time slot! Surprisingly, I had almost no time to be nervous and for this time that was a good thing. The fact that I had already opened my presentation and run my Pester tests also helped.

Some speakers like to be amongst the hustle and bustle of a common area. Some like the peace and quiet of a speaker room or work area. Some put their headphones on. Some go outside. Some pace up and down. Some sit quietly. Many sit in a session in the room. Find the one that works for you.

A few deep breaths

Then just before you are giving your presentation take a few deep breaths, reassure yourself that it's all good and go and be amazing.

Deep breaths will also be useful if you start to feel nervousness overtaking you during your session. Stop, take a deep breath and carry on.

Incidentally, during a presentation in Exeter at my first SQL Saturday I felt decidedly light-headed and as if I was going to pass out. I had literally forgotten to breathe!

What about…… ?

Don’t forget to leave time for questions at the end. Don’t practice to fill all of the allotted time with your presentation. You will need some time for the audience to ask you questions about your presentations.

Having people ask you questions is a good thing. It means that people are engaged in your presentation and interested in what you have shared. Well done, you have achieved what you set out to do and this is some validation

Repeat the question

Repeating the question that you are asked is recommended best practice for presentations but it has another advantage to you. It allows you a little thinking time to organise your thoughts and calm your nerves if needed.

I don’t know

It's ok to answer a question with "I don't know". Follow up by asking if anyone in the audience can add some value, or say "I will research that and find out for you; come and give me your contact details afterwards".

Feedback

Some events will provide you with feedback from your attendees. You can also ask your friends or other friendly community members for feedback on your session. Use this to improve. Don’t take all the feedback to heart. Look for trends in the data. Don’t let the poor feedback get you down and don’t let the good feedback go to your head (Remember the complacent quote at the top of this post!)

On a side note, whilst providing a score for feedback is useful, what is more useful is some reasoning behind the score. Remember also that the speaker is a human being with feelings. Be kind whilst being constructive.

Your knowledge

Don't let worrying about nerves prevent us from hearing the great knowledge and experience that you have to share. You won't be alone in feeling nervous and you can help yourself to overcome those nerves and get as much out of speaking as I do.

You will find members of the SQL community willing to help you. If you visit the SQL Community Slack you can ask questions in #presentingorspeaking

 

Writing Dynamic and Random Test Cases for Pester

I have written a module, SQLDiagAPI, for consuming the SQL Server Diagnostics API with PowerShell. I blogged about how I used Pester to develop one of the functions. Whilst writing Get-SQLDiagFix I wrote some Pester Tests to make sure that the output from the code was as expected.

Pester

For those that don't know, Pester is a PowerShell module for Test Driven Development.

Pester provides a framework for running unit tests to execute and validate PowerShell commands from within PowerShell. Pester consists of a simple set of functions that expose a testing domain-specific language (DSL) for isolating, running, evaluating and reporting the results of PowerShell commands

If you have PowerShell version 5 then you will have Pester already installed, although you should update it to the latest version. If not, you can get Pester from the PowerShell Gallery; follow the instructions on that page to install it. This is a good post to start learning about Pester
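If you need to install or update it from the Gallery, a one-liner along these lines usually does it (the -SkipPublisherCheck switch is needed because the version that ships inside Windows is signed differently):

# Install or update Pester from the PowerShell Gallery
Install-Module -Name Pester -Force -SkipPublisherCheck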

The Command Get-SQLDiagFix

Get-SQLDiagFix  returns the Product Name, Feature Name/Area, KB Number, Title and URL for the Fixes in the Cumulative Updates returned from the SQL Server Diagnostics Recommendations API. One Fix looks like this

07 - Get-SQLDiagFix result.png

This is how I wrote the Pester tests for that command

Mocking the results

In my describe block for each function I mock Get-SQLDiagRecommendations. This is the command that each of the current available commands in the module use to get the recommendations from the SQL Server Diagnostic Recommendations API. I did this by creating a json file from the API and saving it in a json folder inside the tests folder

01 - JSON folder.png

I can then mock Get-SQLDiagRecommendations inside a BeforeAll code block using
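The mock itself is shown as an embedded image on the original post; a minimal sketch of that BeforeAll block (the json file name and path are assumptions based on the folder screenshot) might be:

BeforeAll {
    # Load the saved API output from disk so the tests never hit the internet
    $Recommendations = Get-Content $PSScriptRoot\json\recommendations.json -Raw | ConvertFrom-Json

    # Every call to Get-SQLDiagRecommendations in these tests now returns that object
    Mock Get-SQLDiagRecommendations { $Recommendations }
}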

This means that every time the code in the test calls Get-SQLDiagRecommendations it will not use the internet to connect to the API and return an object. Instead it will return the $Recommendations object which is loaded from a file on the file system. I am not, therefore, depending on any external factors and I have a known set of data for my test results.
I also have a set of mocks in my Output Context code block
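Based on the description below, the setup in that Context block probably looks something like this (Get-SQLDiagProduct and the property names are assumptions; Get-SQLDiagFeature is mentioned later in the post):

Context "Output" {
    BeforeAll {
        # fixes.json holds only the properties that Get-SQLDiagFix returns
        $Fixes = Get-Content $PSScriptRoot\json\fixes.json -Raw | ConvertFrom-Json

        # These module commands call Get-SQLDiagRecommendations, which is already mocked
        $Products = Get-SQLDiagProduct
        $Features = Get-SQLDiagFeature
    }
    # ... the tests described below live in here
}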

The fixes.json is a file which was created from the recommendations.json and only contains the properties returned by Get-SQLDiagFix, which is what we are testing here. I can set variables for Products and Features using the commands from the module as these will call Get-SQLDiagRecommendations which we have already mocked.

Test All of the Fixes

I can now test that the code I have written for Get-SQLDiagFix returns the correct data without any parameters using this test with Compare-Object.
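The test in the post is an embedded image; a sketch of the idea, assuming the variable names above, is:

It "Should return all of the fixes" {
    # Compare-Object returns nothing when the two objects match, so the test passes
    Compare-Object (Get-SQLDiagFix) $Fixes | Should BeNullOrEmpty
}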

If there is no difference between the object returned from Get-SQLDiagFix and the $fixes object which uses the json file then the code is working as expected and the test will pass.

Test Cases

I learned about test cases from Mike Robbins blog post. Test cases enable you to provide a hash table of options and loop through the same test for each of them. Here is an example

There are the following products in the Recommendation API

  • SQL Server 2012 SP3
  • SQL Server 2016 SP1
  • SQL Server 2016 RTM
  • SQL Server 2014 SP1
  • SQL Server 2014 SP2
and I want to run a test for each product to check that the fixes returned from Get-SQLDiagFix for that product match the $fixes object filtered by Product for those products. Here is the code

You can click on the image below to see a larger, more readable version.
02 Test Cases.png

The $TestCases variable holds an array of hashtables, one for each product with a Name that matches the parameter that I use in the test and a value of the product name.

I wrote one test, one It code block.  I refer to the product in the title inside <> using the same name as the name in the hashtable. The test (It) needs a parameter of -TestCases with a value (in this example) of the $TestCases variable we have just defined. It also needs a param block with a parameter that matches the Name value from the hashtables.

The expected test results are placed in a $results variable by filtering the $Fixes variable (defined in the BeforeAll code block above) by the parameter $Productname

The test will then run for each of the test cases in the $TestCases variable comparing the results of Get-SQLDiagFix -Product $Productname with the expected results from the $fixes variable
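Putting that description together, a sketch of what the image shows (the property and parameter names are assumptions) would be:

$TestCases = @(
    @{ Productname = 'SQL Server 2012 SP3' },
    @{ Productname = 'SQL Server 2016 SP1' },
    @{ Productname = 'SQL Server 2016 RTM' },
    @{ Productname = 'SQL Server 2014 SP1' },
    @{ Productname = 'SQL Server 2014 SP2' }
)

It "Should return the correct fixes for <Productname>" -TestCases $TestCases {
    param($Productname)
    # Expected results: the mocked fixes filtered by product
    $results = $Fixes.Where{ $_.Product -eq $Productname }
    Compare-Object (Get-SQLDiagFix -Product $Productname) $results | Should BeNullOrEmpty
}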

Here are the test results

03 - product test results.png

Multiple Products in Test Cases

I also want to test that Get-SQLDiagFix will work for multiple Products. I need to create TestCases for those too. I do that in exactly the same way
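A sketch of the same pattern with arrays of products in each test case (these particular combinations are made up for illustration):

$TestCases = @(
    @{ Productname = 'SQL Server 2012 SP3', 'SQL Server 2016 SP1' },
    @{ Productname = 'SQL Server 2014 SP1', 'SQL Server 2014 SP2' }
)

It "Should return the correct fixes for <Productname>" -TestCases $TestCases {
    param($Productname)
    # -in copes with the array of product names in each test case
    $results = $Fixes.Where{ $_.Product -in $Productname }
    Compare-Object (Get-SQLDiagFix -Product $Productname) $results | Should BeNullOrEmpty
}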

Which looks like this when the tests run
04 - mulitple product test results.png

Single Feature Dynamic Test Cases

Get-SQLDiagFix can also filter the fixes by feature area. The features are returned from Get-SQLDiagFeature. This means that I can create a test for each of the features by using the $features variable which was defined in the BeforeAll block as
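(shown here as a sketch; the exact call in the module's tests may differ)

# All of the feature areas, retrieved via the mocked recommendations
$Features = Get-SQLDiagFeature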

Then I can dynamically create test cases using
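a loop that builds one test case hashtable per feature at run time, roughly like this (parameter names are assumptions):

$TestCases = @()
$Features.ForEach{ $TestCases += @{ Feature = $_ } }

It "Should return the correct fixes for the feature <Feature>" -TestCases $TestCases {
    param($Feature)
    $results = $Fixes.Where{ $_.Feature -eq $Feature }
    Compare-Object (Get-SQLDiagFix -Feature $Feature) $results | Should BeNullOrEmpty
}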

and the results look like

05 - single feature test results.png

Random Dynamic Multiple Feature Test Cases

I also need to test that Get-SQLDiagFix returns the correct results for multiple features and whilst I could create those by hand like the products example above why not let PowerShell do that for me?

I created 10 test cases. Each one has a random number of features between 2 and the number of features.  I can then write one test to make use of those test cases. This is how I do that
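A sketch of that, assuming the $Features variable from above:

# Build 10 test cases, each with a random selection of 2 or more features
$TestCases = @()
1..10 | ForEach-Object {
    $HowMany = Get-Random -Minimum 2 -Maximum $Features.Count
    $TestCases += @{ Feature = ($Features | Get-Random -Count $HowMany) }
}

It "Should return the correct fixes for the features <Feature>" -TestCases $TestCases {
    param($Feature)
    $results = $Fixes.Where{ $_.Feature -in $Feature }
    Compare-Object (Get-SQLDiagFix -Feature $Feature) $results | Should BeNullOrEmpty
}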

Now there are 10 tests each with a random number of features and the results look like this. Each time the test is run it will use a different set of features for each of the 10 tests but I will know that I am testing that the code will return the correct results for multiple features

06 - multiple features.png

Two Sets of Test Cases?

It is also possible for Get-SQLDiagFix to have one or more products and one or more features passed as parameters, which obviously also need to be tested to ensure the code is returning the correct results. As Pester is just PowerShell we can use normal PowerShell code. This means that I can test for a single product and a single feature using a foreach loop and Test Cases like this
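A sketch of that approach, using the $Products and $Features variables assumed earlier:

foreach ($Productname in $Products) {
    # One test case per feature, re-used for every product
    $TestCases = @()
    $Features.ForEach{ $TestCases += @{ Feature = $_ } }

    It "Should return the correct fixes for $Productname and the feature <Feature>" -TestCases $TestCases {
        param($Feature)
        $results = $Fixes.Where{ $_.Product -eq $Productname -and $_.Feature -eq $Feature }
        Compare-Object (Get-SQLDiagFix -Product $Productname -Feature $Feature) $results | Should BeNullOrEmpty
    }
}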

To test for a single product and multiple features I use this code
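A sketch of the single product, multiple random features version, including the results check discussed next:

foreach ($Productname in $Products) {
    # 10 test cases per product, each with a random set of 2 or more features
    $TestCases = @()
    1..10 | ForEach-Object {
        $HowMany = Get-Random -Minimum 2 -Maximum $Features.Count
        $TestCases += @{ Feature = ($Features | Get-Random -Count $HowMany) }
    }

    It "Should return the correct fixes for $Productname and the features <Feature>" -TestCases $TestCases {
        param($Feature)
        $results = $Fixes.Where{ $_.Product -eq $Productname -and $_.Feature -in $Feature }
        # Risky fix: only compare when the random combination actually has fixes,
        # otherwise Compare-Object would throw on an empty object
        if ($results) {
            Compare-Object (Get-SQLDiagFix -Product $Productname -Feature $Feature) $results | Should BeNullOrEmpty
        }
    }
}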

Because it is dynamically creating the values for the two parameters, I have to check that there are some results to test before calling Compare-Object, as Compare-Object will throw an error if the object to be compared is empty. I need to do this because it is possible for the test to pick products and features in a combination that has no fixes in the results.

The reason I have commented it as a risky fix is because if someone changes the code and Get-SQLDiagFix does not return any results then the test would not run and therefore there would be no information from this test that the code had a bug. However, in this suite of tests there are many tests that would fail in that scenario but be careful in your own usage.

I test for multiple products with a single feature and multiple products with multiple features like this
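These follow the same shape again; roughly like this (random product sets this time, with the same empty-results guard):

1..10 | ForEach-Object {
    $ProductNames = $Products | Get-Random -Count (Get-Random -Minimum 2 -Maximum $Products.Count)
    foreach ($Feature in $Features) {
        It "Should return the correct fixes for $ProductNames and the feature $Feature" {
            $results = $Fixes.Where{ $_.Product -in $ProductNames -and $_.Feature -eq $Feature }
            if ($results) {
                Compare-Object (Get-SQLDiagFix -Product $ProductNames -Feature $Feature) $results | Should BeNullOrEmpty
            }
        }
    }
}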

You can see all of the unit tests for the SQLDiagAPI module in my GitHub repository

The module is available on the PowerShell Gallery which means that you can install it using
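the standard gallery one-liner:

Install-Module -Name SQLDiagAPI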

Creating a PowerShell Module and TDD for Get-SQLDiagRecommendations

Yesterday I introduced the first command in the SQLDiagAPI module. A module to consume the SQL Diagnostics API.

I have been asked a few times what the process is for creating a module, using Github and developing with Pester and whilst this is not a comprehensive how-to I hope it will give some food for thought when you decide to write a PowerShell module or start using Pester for code development. I also hope it will encourage you to give it a try and to blog about your experience.

This is my experience from nothing to a module with a function using Test Driven Development with Pester. There are some details missing in some places but if something doesn’t make sense then ask a question. If something is incorrect then point it out. I plan on never stopping learning!

There are many links to further reading and I urge you to not only read the posts linked but also to read further and deeper. That’s a generic point for anyone in the IT field and not specific to PowerShell. Never stop learning. Also, say thank you to those that have taken their time to write content that you find useful. They will really appreciate that.

Github Repository

I created a new repository in Github and used Visual Studio Code to clone the repository by pressing F1 and typing clone – Choosing Git Clone and following the prompts. I started with this because I was always planning to share this code and because source controlling it is the best way to begin.

Plaster Template

When you create a module there are a number of files that you need and I have a number of generic tests that I add. I also have a structure that I create for the artifacts and a number of markdown documents that come with a GitHub Repository. Whilst you could write a PowerShell script to create all of those, there is no need as there is Plaster. Plaster is a PowerShell module that enables you to set up the default scaffolding for your PowerShell module structure and tokenise some files. This makes it much easier to have a default 'scaffold' for the module, a structure for the files and folders, and to create a new module simply. I used Kevin Marquette's post on Plaster to create myself a template module. You can find my Plaster Template here.

You do not need to use Plaster at all but as with anything, if you find yourself repeating steps then it is time to automate it

With my Plaster Template created I could simply run
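the Invoke-Plaster command; the paths here are examples rather than the ones I actually used:

# Scaffold the new module from the template into the cloned repository folder
Invoke-Plaster -TemplatePath C:\Git\PlasterTemplate -DestinationPath C:\Git\SQLDiagAPI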

This created my module. It created this folder and file structure and included some default tests and markdown documents pre-populated.

00 - module

Pester

For those that don't know, Pester is a PowerShell module for Test Driven Development.

Pester provides a framework for running unit tests to execute and validate PowerShell commands from within PowerShell. Pester consists of a simple set of functions that expose a testing domain-specific language (DSL) for isolating, running, evaluating and reporting the results of PowerShell commands

If you have PowerShell version 5 then you will have Pester already installed, although you should update it to the latest version. If not, you can get Pester from the PowerShell Gallery; follow the instructions on that page to install it. This is a good post to start learning about Pester

API Key

Now that I have the module I started to think about the commands. I decided to start with the recommendations API which is described as

Customers will be able to keep their SQL Server instances up-to-date by easily reviewing the recommendations for their SQL Server instances. Customers can filter by product version or by feature area (e.g. Always On, Backup/Restore, Column Store, etc.) and view the latest Cumulative Updates (CU) and the underlying hotfixes addressed in the CU.

To use the API you need an API Key. An API Key is a secret token that identifies the application to the API and is used to control access. You can follow the instructions here https://ecsapi.portal.azure-api.net/ to get one for the SQL Server Diagnostics API.

01 - APIKey

I will need to store the key to use it and, if I am writing code that others will use, consider how they can repeat the steps that I take. I decided to save my API Key using the Export-CliXML command as described by Jaap Brasser here.

You need to enter a username even though it is not used and then enter the API Key as the password. It is saved in the root of the user profile folder as hopefully user accounts will have access there in most shops
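A sketch of that step (the file name here is an assumption):

# Enter any username and the API Key as the password; the key is saved encrypted
# for the current user in the root of the user profile folder
Get-Credential | Export-Clixml -Path "$env:USERPROFILE\SQLDiag.Cred"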

TDD

I approached writing this module using Test Driven Development with Pester. This means that I have to write my tests before I write my code. There are many reasons for doing this which are outside the scope of this blog post. This is a very good post to read more

The first function I wanted to write was to get the recommendations from the API. I decide to call it Get-SQLDiagRecommendations.

I decided that the first test should be to ensure that the API Key exists. Otherwise I would not be able to use it when calling the API. I already had an idea of how I would approach it: checking for the API Key file with Test-Path and writing a warning if the file did not exist.

Mocking

However this is not going to work if I have already saved the key to the file. The test needs to not be reliant on anything external. I need to be able to test this functionality without actually checking my system. I will use Mock to do this. You can read more about mocking with Pester here.

I added this to my Pester test
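Based on the explanation that follows, the mocks are essentially:

Mock Test-Path { $false }
Mock Write-Warning { "Warning" }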

This is what happens when you run this test. When there is a call to Test-Path in the code you have written, instead of actually running Test-Path it will return whatever is inside the curly braces, in this case false. For Write-Warning it will return a string of Warning.

This means that I can write a test like this
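For example (the test name here is mine rather than necessarily the original):

It "Should warn if there is no API Key file" {
    Get-SQLDiagRecommendations | Should Be "Warning"
}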

So I know that when running my code in this test, Test-Path will return false, which will invoke Write-Warning in my code, and in the test that will return "Warning". So if I have written my code correctly the test will pass without actually running the real Test-Path and interacting with my system or running Write-Warning, which makes it easier to test that warnings are thrown correctly.

The name of the test will also let me (and others) know in the future what I was trying to achieve. This means that if I (or someone else) changes the code and the test fails they can understand what was meant to happen. They can then either write a new test for the changed code if the requirements are now different or alter the code so that it passes the original test.

I use

so that the only red text that I see on the screen is the results of the test and not any PowerShell errors.

Asserting

I can also check that I have successfully called my Mocks using Assert-MockCalled. This command will check that a command that has been mocked has been called successfully during the test in the scope of the Describe (or in this case Context) block of the tests
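For the Test-Path mock that looks like:

Assert-MockCalled -CommandName Test-Path -Times 1 -Exactly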

I specify the command name, the number of times that I expect the mock to have been called and, because I know that it will be exactly 1 time, I set Exactly to $true. If I set Exactly to false it would test that the mock was called at least the number of times specified. This is another check that I really have called the Mocks that I defined and that the results are correct and dependent only on the code.

I set up the same test for Write-Warning.

Failed Test

I can now run my Pester tests using
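something like this (the path to the tests folder is an example):

Invoke-Pester .\tests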

and see that some failed.

02 - Failed Pester tests

Of course it failed; I don't have a function named Get-SQLDiagRecommendations yet!

So why run the test?

I need to ensure that my test fails before I write the code to pass it. If I don’t do that I may mistakenly write a test that passes and therefore not be correctly testing my code.

You can also see that it has run all of the .Tests.ps1 files in the tests directory and has taken 42 seconds to run. The tests directory includes a number of Pester tests including checking that all of the scripts pass the Script Analyser rules and that all of the functions have the correct help. (thank you June Blender for that test)

Show

I can reduce the output of the tests using the Show parameter of Invoke-Pester. I will often use Fails as this will show the describe and context titles and only the tests that fail. This will run much quicker as it will not need to output all of the passed tests to the screen
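For example, something along these lines:

Invoke-Pester .\tests -Show Fails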

03 - Pester show fails

Now the test is running in less than half of the time. You can filter the output in further ways using Show. You can run
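most likely something like:

Get-Help Invoke-Pester -Parameter Show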

to see how else you can do this.

Tags

As I am going to be writing tests and then writing code to pass the tests repeatedly, I don't want to run all of these tests all of the time, so I can use the Tags parameter of Invoke-Pester to only run a certain suite of tests. In the Unit.Tests.ps1 file the Describe block looks like this
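Roughly like this (the Unit tag is the one mentioned; everything else is illustrative):

Describe "Get-SQLDiagRecommendations" -Tags Unit {
    # the unit tests for the function live in here
}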

So I can run just the tests tagged Unit and skip all of the other tests. Combined with the Show Fails to reduce the output my Invoke-Pester code looks like this
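So the call becomes something like:

Invoke-Pester .\tests -Tag Unit -Show Fails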

04 - Pester Tags

Now that I am only running the tests that I need for writing the code for the command, the tests run in under half a second 🙂 This is so much better when I am going to be running them repeatedly.

The other tests have different tags and I will show them running later in the post.

Code

Finally, we can write some code to pass our failing test
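The real function is in the GitHub repository linked at the end; a cut-down sketch of the behaviour being tested here (the file name and message wording are assumptions) is:

function Get-SQLDiagRecommendations {
    [cmdletbinding()]
    Param(
        [string]$APIKey
    )
    if (-not $APIKey) {
        # No key passed in, so look for the saved credential file
        $KeyPath = "$env:USERPROFILE\SQLDiag.Cred"
        if (-not (Test-Path $KeyPath)) {
            Write-Warning "There is no API Key file at $KeyPath. Save one with Get-Credential | Export-Clixml -Path $KeyPath or pass it in with the -APIKey parameter."
            return
        }
        $APIKey = (Import-Clixml $KeyPath).GetNetworkCredential().Password
    }
    # ... the call to the API is added as further tests are written
}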

Which would look like this if the file does not exist and the API Key parameter is not used

05 - Warning

I like to provide users with a useful message that they can follow rather than a lot of red text that they need to decipher

And now our tests pass

06 - Passing Tests

If you look at the API documentation the API requires a callerid as well as the APIKey. In the examples it uses the value from
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\MachineGUID

We can get that using Get-ItemProperty and without it we can’t call the API so I wrote tests like this.
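A sketch of what such a test might look like (the test title and dummy key are mine):

It "Should warn and not continue if the Machine GUID cannot be found" {
    # Mocks scoped to this It block only, marked Verifiable for the assert below
    Mock Get-MachineGUID { } -Verifiable
    Mock Write-Warning { "Warning" } -Verifiable

    # Pass an APIKey so we do not hit the missing-key warning tested earlier
    Get-SQLDiagRecommendations -APIKey dummykey | Should Be "Warning"

    Assert-VerifiableMocks
}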

I am not saying this is the correct way to write your tests. I am showing that you can test multiple things in an It block and if any one of them fails the entire test fails.

I am mocking the internal function Get-MachineGuid and Write-Warning just in the scope of this It block and passing an APIKey parameter to Get-SQLDiagRecommendations so that we don't hit the write-warnings we tested for above, and then using Assert-VerifiableMocks to verify that the mocks have been called. It does not verify how many times, just that all of the mocks in that block have been called.

The test fails as expected and then I write the code to pass the test. This is the internal function to get the Machine GUID
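A sketch of that internal function, using the registry value from the API documentation:

function Get-MachineGUID {
    try {
        (Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Cryptography' -Name MachineGuid).MachineGuid
    }
    catch {
        Write-Warning "Failed to read the Machine GUID from the registry"
    }
}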

 

and this is the call to the internal function and warning message
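which, inside Get-SQLDiagRecommendations, is roughly:

$MachineGUID = Get-MachineGUID
if (-not $MachineGUID) {
    Write-Warning "Unable to get the Machine GUID which the API requires as the callerid - cannot continue"
    return
}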

Rinse and repeat

That is basically the process that I follow to write a function. I just write a test, write some code to fix it, write another test, write some code to fix it. I keep going until I have finished writing the code and all the tests have passed.

Best Practice Code

Once that was done and my Unit test had passed I run
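which is just the ScriptAnalyzer-tagged tests, along the lines of:

Invoke-Pester .\tests -Tag ScriptAnalyzer -Show Fails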

To check that the PowerShell code that I had written conformed to the Script Analyzer rules. I added an exception to the Help.Exceptions.ps1 file to not run the rule for plural nouns as I think the command has to be called Get-SQLDiagRecommendations with an S! I have tagged the ScriptAnalyzer Tests with a tag so I can just run those tests.

Help

As that had all passed I could then run
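the Help-tagged tests, something like:

Invoke-Pester .\tests -Tag Help -Show Fails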

Which tests if I had the correct help for my functions. Of course that failed but I could use the nifty new feature in VS Code's PowerShell extension to add the help scaffolding really easily as I describe here

Then I could run all 563 of the Pester tests in the tests folder and be happy that everything was OK

11 - All Pester passed.PNG

By the end I had written the module, which you can find here

There are instructions and a script to install it easily.

Right now it has only got the one function to get the SQL recommendations but I will look at expanding that over the next few days and, once it is more complete, put it onto the PowerShell Gallery and maybe move it into the SQL Server Community GitHub Organisation, home of https://dbatools.io, https://dbareports.io, Invoke-SQLCmd2 and the SSIS Reporting pack

Contribute

Of course I am happy to have others contribute to this, in fact I encourage it. Please fork and send PRs and make this a useful module with more commands. There is also the Diagnostic Analysis API to work with, and I am very interested to see how we can make use of that with PowerShell

As always, I highly recommend that if you want to know more about Pester you head over here and purchase this book by Adam

VSCode – PowerShell extension 1.4.0 new command Out-CurrentFile

Yesterday David Wilson announced version 1.4.0 of the PowerShell extension for VSCode

He also pointed out that there have been over 1 million installs of the extension. 🙂

If you want to install the PowerShell extension you can hit F1 in VSCode and type
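something like the usual install command (the exact wording may differ between VSCode versions):

ext install PowerShell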

or CTRL + SHIFT + X to open the extensions side bar and search for PowerShell and click the green install button.

There are a few enhancements in this release which you can read about here but I noticed the New File API and Out-CurrentFile command that were contributed by Doug Finke.

If your focus is in the editor in VSCode, you can simply CTRL + N and create a new file. You can alter the language that the file uses with CTRL + K, M (that’s CTRL and K and then M not CTRL and K and M!) or set the default new file language as I described here.

01 -new file.gif

If you are using VSCode as your daily PowerShell command line though, you would have to alter your focus from the terminal panel into the editor to do this. Now though you have

which enables you to create a new file from the terminal

02- Another new file.gif

The Out-CurrentFile command sends the output of a command through Out-String to a new file. This makes it much easier to keep the results of some commands. You don’t have to pipe them to clip or highlight and CTRL + C to copy them and then open a  file and paste them. Of course you can use Out-File and then open the file but this is another way.

Lets see how it works. In this example, I want to export the T-SQL for creating the logins on an instance and add some comments to the code before saving it somewhere safely. I am going to use the Export-SQLLogin command from the dbatools module. MVP Cláudio Silva has written a great post on that command today.

First create a new file

and then
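roughly this (the instance name is a placeholder and the dbatools parameter name of the time is an assumption; the point is piping the exported T-SQL straight into the current editor file):

Export-SqlLogin -SqlServer ROB-XPS | Out-CurrentFile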

In the gif above

  • I create a new file,
  • Export the logins to it
  • change the language of the file to T-SQL
  • remove the string quotes and
  • add some comments.

All without leaving VSCode, just another reason that my productivity is increased using VSCode!

Unfortunately, it doesn’t work so well with Pester. (Of course I was going to try Pester!). This makes sense though, as Pester uses Write-Host to display the test results so nothing is going to the output stream so
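a command along these lines (my guess at what was run):

Invoke-Pester | Out-CurrentFile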

returns this

04 - pester.PNG

and if you use the -PassThru parameter then you get the $Tests object as a string so
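presumably something like:

Invoke-Pester -PassThru | Out-CurrentFile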

shows

05 - pester object.PNG

Which isn’t what I would need but I have other ways of working with Pester output.

Pester for Presentations – Ensuring it goes ok

Whilst I was at PSCONFEU I presented a session on writing Pester tests instead of using checklists. You can see it here

During the talk I showed the pester test that I use to make sure that everything is ready for my presentation. A couple of people have asked me about this and wanted to know more so I thought that I would blog about it.

Some have said that I might be being a little OCD about it 😉 I agree that it could seem like that but there is nothing worse than having things go wrong during your presentation. It makes your heart beat faster and removes the emphasis from the presentation that you give.

When it is things that you as a presenter could have been able to foresee, like a VM not being started or a database not being restored to the pre-demo state or being logged in as the wrong user then it is much worse

I use Pester to ensure that my environment for my presentation is as I expect and in fact, in Hanover when I ran through my Pester test for my NUC environment I found that one of my SQL Servers had decided to be in a different time zone and therefore the SQL Service would not authenticate and start. I was able to quickly remove the references to that server and save myself from a sea of red during my demos

For those that don't know, Pester is a PowerShell module for Test Driven Development.

Pester provides a framework for running unit tests to execute and validate PowerShell commands from within PowerShell. Pester consists of a simple set of functions that expose a testing domain-specific language (DSL) for isolating, running, evaluating and reporting the results of PowerShell commands

If you have PowerShell version 5 then you will have Pester already installed, although you should update it to the latest version. If not, you can get Pester from the PowerShell Gallery; follow the instructions on that page to install it. This is a good post to start learning about Pester

What can you test? Everything. Well, specifically everything that you can write a PowerShell command to check. So when I am setting up for my presentation I check the following things. I add new things to my tests as I think of them or as I observe things that may break my presentations. Most recently that was ensuring that my Visual Studio Code session was running under the correct user. I did that like this

Describe "Presentation Test" {
    Context "VSCode" {
        It "Should be using the right username" {
            whoami | Should Be 'TheBeard\Rob'
        }
    }
}

01 - username.PNG

I think about the things that are important to me for my presentation.  I want to ensure that I only have one VS Code window open to avoid that situation where I am clicking through windows looking for the correct window. I can do that using Get-Process

It "Should have Code Insiders Open" {
    (Get-Process 'Code - Insiders' -ErrorAction SilentlyContinue) | Should Not BeNullOrEmpty
}
It "Should have One VS Code Process" {
    (Get-Process 'Code - Insiders' -ErrorAction SilentlyContinue).Count | Should Be 1
}

I use -ErrorAction SilentlyContinue so that I don’t get a sea of red when I run the tests. Next I want to check my PowerPoint is ready for my presentation

It "Should have PowerPoint Open" {
    (Get-Process POWERPNT -ErrorAction SilentlyContinue) | Should Not BeNullOrEmpty
}
It "Should have One PowerPoint Open" {
    (Get-Process POWERPNT -ErrorAction SilentlyContinue).Count | Should Be 1
}
It "Should have the correct PowerPoint Presentation Open" {
    (Get-Process POWERPNT -ErrorAction SilentlyContinue).MainWindowTitle | Should Be 'dbatools – SQL Server and PowerShell together – PowerPoint'
}

Again I use Get-Process. I check if PowerPoint is open, if there is one PowerPoint open, and I use the MainWindowTitle property to check that it is the right PowerPoint presentation, after nearly starting a presentation for the SqlServer module with the dbatools slides!

I don’t want any distractions when I am presenting. I have the sort of friends who will notice if I get notifications for twitter popping up on my screen and repeatedly send tweets to make people laugh. (I admit, I’m one of those friends – I do this too!)

02 - Friends!!.PNG

Now I cannot get a test for quiet hours working. You can apparently use a Registry key, which of course you can check with PowerShell but I was unable to get it working. I haven’t looked at testing for Presentation Mode  but I test that those programmes are shut down, again using Get-Process

        It "Mail Should be closed" {
            (Get-Process HxMail -ErrorAction SilentlyContinue).Count | Should Be 0
        }
        It "Tweetium should be closed" {
            (Get-Process WWAHost -ErrorAction SilentlyContinue).Count | Should Be 0
        }
        It "Slack should be closed" {
            (Get-Process slack* -ErrorAction SilentlyContinue).Count | Should BE 0
        }

I am generally presenting with SQL Server so I need to make sure that SQL Server is running. I do this with Get-Service

Context "Local SQL" {
        It "DBEngine is running" {
            (Get-Service mssqlserver).Status | Should Be Running
        }
        It "SQL Server Agent is running" {
            (Get-Service sqlserveragent).Status | Should Be Running
        }
        It "DAVE DBEngine is running" {
            (Get-Service mssql*Dave).Status | Should Be Running
        }
        It "DAVE Agent is running" {
            (Get-Service sqlagent*dave).Status | Should Be Running
        }
    }

In this example I am testing that the SQL Service and the Agent service are running on both of my local instances.

I use a NUC running Hyper-V to enable me to show a number of SQL Servers running in a domain environment so I need to be able to test those too. I set the values of the servers I need into a variable and check that the VM is running and that they respond to ping

 Context "VM State" {
        $NUCServers = 'BeardDC1','BeardDC2','LinuxvNextCTP14','SQL2005Ser2003','SQL2012Ser08AG3','SQL2012Ser08AG1','SQL2012Ser08AG2','SQL2014Ser12R2','SQL2016N1','SQL2016N2','SQL2016N3','SQLVnextN1','SQL2008Ser12R2'
        $NUCVMs = Get-VM -ComputerName beardnuc | Where-Object {$_.Name -in $NUCServers}
            foreach($VM in $NUCVms)
                {
                $VMName = $VM.Name
                  It "$VMName Should be Running"{
                    $VM.State | Should Be 'Running'
                  }
			    }
    }
Context "THEBEARD_Domain" {
            $NUCServers = 'BeardDC1','BeardDC2','LinuxvNextCTP14','SQL2005Ser2003','SQL2012Ser08AG3','SQL2012Ser08AG1','SQL2012Ser08AG2','SQL2014Ser12R2','SQL2016N1','SQL2016N2','SQL2016N3','SQLVnextN1','SQL2008Ser12R2'
            foreach($VM in $NUCServers)
                {
                                 It "$VM Should respond to ping" {
				(Test-Connection -ComputerName $VM -Count 1 -Quiet -ErrorAction SilentlyContinue) | Should be $True
				}
                }
    }

I also need to check if the SQL Service and the Agent Service is running on each server

  Context "SQL State" {
        $SQLServers = (Get-VM -ComputerName beardnuc | Where-Object {$_.Name -like '*SQL*'  -and $_.State -eq 'Running'}).Name
        foreach($Server in $SQLServers)
        {
          $DBEngine = Get-service -ComputerName $Server -Name MSSQLSERVER
           It "$Server  DBEngine should be running" {
                $DBEngine.Status | Should Be 'Running'
            }
           It "$Server DBEngine Should be Auto Start" {
            $DBEngine.StartType | Should be 'Automatic'
           }
              $Agent= Get-service -ComputerName $Server -Name SQLSERVERAGENT
              It "$Server Agent should be running" {
                  $Agent.Status | Should Be 'Running'
           }
           It "$Server Agent Should be Auto Start" {
            $Agent.StartType | Should be 'Automatic'
           }
        }
        It "Linux SQL Server should be accepting connections" {
            $cred = Import-Clixml C:\temp\sa.xml
            {Connect-DbaSqlServer -SqlServer LinuxvnextCTP14 -Credential $cred -ConnectTimeout 60} | Should Not Throw
        }

    }
}

I check that the Linux SQL Server is available by storing the credential using Export-CliXML and then using that credential with Connect-DbaSqlServer from dbatools

Using a NUC means I sometimes have fun with networking so I have a couple of tests for that too. Testing for the correct DNS Servers and gateways

    It "Should have DNS Servers for correct interface" {
        (Get-DnsClientServerAddress -InterfaceAlias 'Ethernet 3').Serveraddresses | Should Be @('10.0.0.1','10.0.0.2')
    }
    It "Should have correct gateway for alias"{
        (Get-NetIPConfiguration -InterfaceAlias 'Ethernet 3').Ipv4DefaultGateway.NextHop | Should Be '10.0.0.10'
    }

All of those are generic tests that have evolved over time and are run for every presentation but when I have specific things you require for a single presentation I test for those too.

For example, later this week Cláudio Silva and I are presenting on dbatools at TUGAIT. We are showing the Test-DbaMaxMemory, Get-DbaMaxMemory and Set-DbaMaxMemory commands so we need to ensure that the Max Memory for some servers is (In)Correctly set. I use Connect-DbaSqlServer to create an SMO Server object and test that

    It "Max Memory on SQl2012SerAG1 2 and 3 should be 2147483647" {
        (Connect-DbaSqlServer SQL2012Ser08AG1).Configuration.MaxServerMemory.RunValue | Should Be 2147483647
        (Connect-DbaSqlServer SQL2012Ser08AG2).Configuration.MaxServerMemory.RunValue | Should Be 2147483647
        (Connect-DbaSqlServer SQL2012Ser08AG3).Configuration.MaxServerMemory.RunValue | Should Be 2147483647
    }

We are also showing the Test-DbaIdentityUsage command so a column needs to be pre-prepared in AdventureWorks2014 to be able to show the error

    It "ShiftID LastValue Should be 255" {
        $a = Test-DbaIdentityUsage -SqlInstance ROB-XPS -Databases AdventureWorks2014 -NoSystemDb
        $a.Where{$_.Column -eq 'ShiftID'}.LastValue | should Be 255
    }

To ensure that we have orphaned files available for the Find-DbaOrphanedFile command I use this

    It "has Orphaned Files ready"{
        (Find-DbaOrphanedFile -SqlServer SQL2016N2).Count | Should Be 30
    }

There are any number of things that you may want to test to ensure that, as best as possible, the demo gods are not going to come and bite you in the middle of your presentation.

  • Files or Folders exist (or dont exist)
  • Databases, Agent Jobs, Alerts
  • Operators, Logins
  • SSIS packages, SSRS Reports
  • PowerBi files
  • Azure connectivity
  • Azure components

The list is endless, just look at what you require for your presentation.

Anything you can check with PowerShell you can test with Pester so build up your Pester presentation tests and reduce the reliance on the demo gods! I’ll still leave this here just to be safe!!

pray to the demo gods.jpg

Pester Test Inception and the Show Parameter

My fantastic friend Andre Kamman b | t  and I presented at PSConfEu last week

C_EDtK0XoAA1PL7 (2).jpg

and whilst we were there we were chatting about running Pester Tests. He wanted to know how he could run a Pester Test and not lose the failed tests as they scrolled past him. In his particular example we were talking about running hundreds of tests on thousands of databases on hundreds of servers

01 - pesters.gif

I guess it looks something like that!!

I explained about the -Show parameter which allows you to filter the results that you see. Using Get-Help Invoke-Pester you can see this

   -Show
Customizes the output Pester writes to the screen. Available options are None, Default,
Passed, Failed, Pending, Skipped, Inconclusive, Describe, Context, Summary, Header, All, Fails.

The options can be combined to define presets.
Common use cases are:

None – to write no output to the screen.
All – to write all available information (this is default option).
Fails – to write everything except Passed (but including Describes etc.).

A common setting is also Failed, Summary, to write only failed tests and test summary.

This parameter does not affect the PassThru custom object or the XML output that
is written when you use the Output parameters.

Required?                    false
Position?                    named
Default value                All
Accept pipeline input?       false
Accept wildcard characters?  false

So there are numerous options available to you. Lets see what they look like

I will use a dummy test which creates 10 Context blocks and runs from 1 to 10 and checks if the number has a remainder when divided by 7

Describe "Only the 7s Shall Pass" {
    $Servers = 0..10
    foreach($Server in $servers)
    {
        Context "This is the context for $Server" {
        foreach($A in 1..10){
            It "Should Not Pass for the 7s" {
                $A % 7 | Should Not Be 0
                }
            }
        }
    }
}

Imagine it is 10 servers running 10 different tests

For the Show parameter All is the default, which is the output that you are used to

02 - All.gif

None does not write anything out. You could use this with -Passthru which will pass ALL of the test results to a variable and if you added -OutputFile and -OutputFormat then you can save ALL of the results to a file for consumption by another system. The -Show parameter only affects the output from the Invoke-Pester command to the host not the output to the files or the variable.

Header only returns the header from the test results and looks like this (I have included None so that you can see!)

03 - none and header.PNG

Summary, as expected, returns only the summary of the results

04 - summary.PNG

You can use more than one value for the Show parameter so if you chose Header, Summary, Describe you would get this

05 - headerdesscribe sumnmary.PNG

You could use Failed to only show the failed tests which looks like this

06 - failed.PNG

but Andre explained that he also wanted to be able to see some progress whilst the test was running. If there were no failures then he would not see anything at all.

So Fails might be the answer (or Failed and Summary but that would not show the progress)

07 - fails.PNG

Fails shows the Header, Describe, Context  and also shows the Summary.

However we carried on talking. PSConfEU is a fantastic place to talk about PowerShell 🙂 and we wondered what would happen if you invoked Pester from inside a Pester test. I was pretty sure that it would work as Pester is just PowerShell but I thought it would be fun to have a look and see how we could solve that requirement

So I created 3 "Internal Tests"; these are the ones we don't want to see the output for. I then wrote an overarching Pester test to call them. In that Pester test script I assigned the results of each test to a variable. When you examine it you see

08 - Pester Object.PNG

The custom object that is created shows the counts of all different results of the tests, the time it took and also the test result.

So I could create a Pester Test to check the Failed Count property of that Test result

$InternalTest1.FailedCount | Should Be 0

To make sure that we don’t lose the results of the tests we can output  them to a file like this

$InternalTest1 = Invoke-Pester .\Inside1.Tests.ps1 -Show None -PassThru -OutputFile C:\temp\Internal_Test1_Results.xml -OutputFormat NUnitXml

So now we can run Invoke-Pester and point it at that file and it will show the progress and the final result on the screen.

09 finale.PNG

You could make use of this in different ways

  • Server 1
    • Database1
    • Database2
    • Database3
    • Database4
  • Server 2
    • Database1
    • Database2
    • Database3
    • Database4
  • Server 3
    • Database1
    • Database2
    • Database3
    • Database4

Or by Test Category

  • Backup
    • Server1
    • Server 2
    • Server 3
    • Server 4
  • Agent Jobs
    • Server 1
    • Server 2
    • Server 3
    • Server 4
  • Indexes
    • Server 1
    • Server 2
    • Server 3
    • Server 4

Your only limitation is your imagination.

As we have mentioned PSConfEU, you really should check out the videos on the YouTube channel. All of the videos that were successfully recorded will be on there. You could start with this one and mark your diaries for April 16-20 2018