New .NET Notebooks are here – which means PowerShell 7 notebooks are here.

Data Science folk have long used notebooks for documentation and to share re-runnable research. Azure Data Studio included this notebook functionality and added a SQL kernel in which, with a little bit of faffing, you could run PowerShell, and then a Python kernel that enabled PowerShell. It seems that notebooks are so cool that everyone is creating them these days! I was browsing Twitter when I saw this tweet.

PowerShell 7 Notebooks 🙂

A notebook experience for PowerShell 7? That sounds amazing. This will enable a true cross-platform PowerShell notebook experience, which is lacking from the Python version as it uses Windows PowerShell on Windows and PowerShell Core on other operating systems.

The first thing I asked was: will this come to Azure Data Studio? I got an immediate response from Sydney Smith, PowerShell Program Manager, saying it is on the roadmap.

Install dependencies

To be able to run the notebook, you need to install some dependencies. First install the .NET Core SDK, which you can download from https://dotnet.microsoft.com/download. This needs admin permissions to install.

You also need a Python installation. You can use Anaconda, which you can download from https://www.anaconda.com/distribution/. This does not need admin permissions to install.
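If you want to check that both are available before going any further, you can run these from any prompt:

dotnet --version
python --version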

Add Anaconda to Windows Terminal

I have added the Anaconda prompt to Windows Terminal so that I have one entry point for all my CLIs. Open the settings file and add the code below. (It will also give you an icon and background.)

        {
            // Make changes here to the Anaconda.exe profile
            "guid": "{0caa0dad-35be-5f56-a7ff-afceeeaa6101}",
            "name": "Anaconda",
            "commandline": "cmd.exe /K C:\\Users\\mrrob\\Anaconda3\\Scripts\\activate.bat",
            "hidden": false,
            "backgroundImage": "C:\\Users\\mrrob\\Anaconda3\\Menu\\anaconda-navigator.ico",
            "icon": "C:\\Users\\mrrob\\Anaconda3\\Menu\\anaconda-navigator.ico",
            "backgroundImageAlignment": "topRight",
            "backgroundImageStretchMode": "uniform",
            "backgroundImageOpacity": 0.1
        }

and it appears in the drop down

With Anaconda installed, check that the kernel is available on your path by running jupyter kernelspec list. If, like me, you have Azure Data Studio installed, you will have additional kernels, but the important line here is

python3 C:\Users\USERNAME\Anaconda3\share\jupyter\kernels\python3

In Windows Terminal move to a PowerShell 7 prompt and install the dotnet interactive tool

dotnet tool install --global Microsoft.dotnet-interactive

Then you can install the .NET kernel in your Anaconda prompt using this command

dotnet interactive jupyter install

Sometimes new things have errors

I had an error when I tried this first time

Could not execute because the specified command or file was not found.
Possible reasons for this include:
* You misspelled a built-in dotnet command.
* You intended to execute a .NET Core program, but dotnet-interactive does not exist.
* You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.

This is easily fixed by adding %USERPROFILE%\.dotnet\tools to the path, for example with set PATH=%PATH%;%USERPROFILE%\.dotnet\tools in a cmd prompt.
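If you are in a PowerShell prompt rather than cmd, a rough equivalent for the current session is this quick sketch:

# add the dotnet global tools folder to the PATH for this session only
$dotnetTools = Join-Path $env:USERPROFILE '.dotnet\tools'
if (($env:PATH -split ';') -notcontains $dotnetTools) {
    $env:PATH = "$env:PATH;$dotnetTools"
}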

Running jupyter kernelspec list shows that the .NET kernel is installed for C Sharp, F Sharp and .NET PowerShell

Let's open a Notebook

Now you want to play with it!
You can run the lab environment using `jupyter lab`

This opens a browser

You can open existing Azure Data Studio PowerShell notebooks (but not SQL ones)

Sometimes new things have errors Part 2

Unfortunately, I get errors when trying to import Pester, which means I cannot use my dbachecks notebooks in this blog post. I have raised an issue on the repo here.

Create a New Notebook

But it is easy to create a new Notebook

In the launcher page click the .NET PowerShell button


Which will open a new Notebook in the directory that you launched the lab from. You can then add Code or Markdown as I have described before here.

Then you can add code, markdown and images to create your notebook.

Once you have finished using the notebook lab, you can shut it down in the Anaconda prompt with CTRL + C

Here is a video of running a notebook which anyone can use to create a couple of Docker containers running SQL 2019 and query them with dbatools. You can find the notebook further down this post.

Sharing Notebooks

You can create notebooks to run common tasks. Even better, from the lab you can convert the notebook including the results to a variety of formats to share with other non-technical people. I used this functionality this week to export Azure Data Studio Notebooks to HTML and PDF for a Project Manager 🙂

You can find the Export Notebook command in the File menu

Exporting to HTML did not export the images but it does include the results
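If you prefer the command line, nbconvert (part of the Jupyter tooling that Anaconda installs) can do a similar export. This is just a sketch and the notebook name is a placeholder:

jupyter nbconvert --to html 'Exploring dbatools.ipynb'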

You can share notebooks via GitHub – Either in a gist like this

or by providing a straight link to the notebook in GitHub https://github.com/SQLDBAWithABeard/Notebooks/blob/master/notebooks/Exploring%20dbatools.ipynb

You can also use Binder https://mybinder.org/

This uses Docker to create an interactive Notebook. Create a GitHub repo like https://github.com/SQLDBAWithABeard/Notebooks (or just clone it), copy your notebooks into the notebooks folder, push the changes to GitHub, and then go to https://mybinder.org/ and add the URL of your repository.

You can see what it looks like by clicking the button below which Binder creates for you

Unfortunately the kernel only supports Python for the moment but you can see the possibilities 🙂

How to fork a GitHub repository and contribute to an open source project

I enjoy maintaining open source GitHub repositories such as dbachecks and ADSNotebook. I absolutely love it when people add more functionality to them.

To collaborate with a repository in GitHub you need to follow these steps:

  • Fork the repository into your own GitHub
  • Clone the repository to your local machine
  • Create a new branch for your changes
  • Make some changes and commit them with useful messages
  • Push the changes to your repository
  • Create a Pull Request from your repository back to the original one

You will need to have git.exe available, which you can download and install from https://git-scm.com/downloads if required.
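Each of those steps is walked through below, but for reference the whole workflow at the command line looks roughly like this (the repository URL and branch name are placeholders):

git clone https://github.com/YOURGITHUBUSERNAME/dbachecks.git   # clone your fork
cd dbachecks
git checkout -b AwesomeNewFeature                                # create a branch for your change
# make your changes, then
git add .                                                        # stage the changes
git commit -m 'Explain why you made the change'                  # commit with a useful message
git push --set-upstream origin AwesomeNewFeature                 # push the branch to your fork
# finally, open a Pull Request on GitHub back to the original repository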

Fork the repository into your own GitHub

A fork is a copy of the original repository. This allows you to make changes without affecting the original project. It does not get updated when the original project gets updated (we will talk about that in the next post). This enables you to code a new feature or a bug fix, test it locally and make sure it is working.

Let’s take dbachecks as our example. Start by going to the project in GitHub. In this case the URL is https://github.com/sqlcollaborative/dbachecks. You will see a Fork button at the top right of the page.

When you click the button the repository is copied into your own GitHub account

The page will open at https://github.com/YOURGITHUBUSERNAME/NameOfRepository, in this case https://github.com/SQLDBAWithABeard/dbachecks. You will be able to see that it is a fork of the original repository at the top of the page.

Clone the repository to your local machine

Forking the repository has created a remote repository stored on the GitHub servers. Now that the repository has been forked you need to clone it to your local machine to create a local repository so that you can start coding your amazing fix. When you have finished you can then sync it back to your remote repository ready for a Pull Request back to the original repository.

In your browser, at the remote repository that you just created (https://github.com/YOURGITHUBUSERNAME/NameOfRepository if you have closed the page), click on Clone or Download and then the icon to the right to copy the URL.

You can clone your repository in VS Code or Azure Data Studio by opening the Command Palette with F1 or CTRL + SHIFT + P on Windows or Linux, or ⇧⌘P or F1 on a Mac,

then start typing clone until you see Git: Clone and press Enter or click it

Paste in the URL that you just copied and press Enter. A dialog will open asking you to select a folder. This is the parent directory where your local repository will be created; the clone will create a directory for your repository so you do not need to. I suggest that you use a folder called GitHub or something similar to hold all of the repositories that you are going to clone and create.

When it has finished it will ask you if you wish to open the repository

If you click Open it will close anything that you already have open and open the folder. If you click Add to Workspace it will add the folder to the workspace and leave everything you already had open as it was. Surprisingly, clicking Open in New Window will open the folder in a new instance of Visual Studio Code or Azure Data Studio!

and you will also be able to see the local repository files on your computer

You can clone the repository at the command line if you wish by navigating to your local GitHub directory and running git clone TheURLYouCopied

Now that your local repository has been created, it’s time to do your magic coding.

Create a new branch for your changes

It is a good idea to create a branch for your amazing new feature. This enables you to work on coding for that feature in isolation. It has the added advantage that if you mess it right royally up, you can just delete that branch and start again with a new one!

To create a branch in VS Code or Azure Data Studio you can click on the branch name at the bottom left.

Or open the Command Palette and type Branch until you see Git: Create Branch

You will be prompted for a branch name

I like to choose a name that relates to the code that I am writing, like configurable_engine or removeerroringexample. You can see the name of the branch in the bottom left so that you always know which branch you are working on.

The icon shows that the branch is only local and hasn’t been pushed (published) to the remote repository yet
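If you prefer the command line, you can create and switch to the new branch in one go (the branch name is just an example):

git checkout -b configurable_engine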

Make some changes and commit them with useful messages

Now you can start writing your code for your awesome new feature, bug fix or maybe just documentation improvement. Keep your commits small and give them useful commit messages that explain why you have made the change, as the diff tooling will be able to show what change you have made.

Write your code or change the documentation, save the file and in Visual Studio Code or Azure Data Studio you will see that the source control icon has a number on it

Clicking on the icon will show the files that have changes ready

You can write your commit message in the box and press CTRL + ENTER to commit your changes with a message

If you want to do this at the command line, you can use git status to see which files have changes

You will need to git add . or git add .\pathtofile to stage your changes ready for committing and then git commit -m 'Commit Message' to commit them

Notice that I did exactly what I just said not to do! A better commit message would have been So that people can find the guide to forking and creating a PR

Push the changes to your repository

You only have the changes that you have made in your local repository on your computer. Now you need to push those changes to GitHub, your remote repository. You can click on the publish icon

You will get a pop-up asking you if you wish to stage your changes. I click Yes and never Always so that I can use this prompt as a sanity check that I am doing the right thing

At the command line you can push the branch; if you do that, you will have to tell git where the branch needs to go. If you just type git push it will helpfully tell you

fatal: The current branch AwesomeNewFeature has no upstream branch.
To push the current branch and set the remote as upstream, use

    git push --set-upstream origin AwesomeNewFeature

So you will need to use that command

You can see in the bottom left that the icon has changed

and if you read the output of the git push command you will see what the next step is also.

Create a Pull Request from your repository back to the original one

You can CTRL click the link in the git push output if you have pushed from the command line, or if you visit either your repository or the original repository in your browser you will see that there is a Compare and Pull Request button

You click that and let GitHub do its magic

and it will create a Pull Request for you ready for you to fill in the required information, ask for reviewers and set other options. Once you have done that you can click Create pull request and wait for the project maintainer to review it and (hopefully) accept it into their project

You can find the Pull Request that I created here https://github.com/sqlcollaborative/dbachecks/pull/720 and see how the rest of this blog post was created.

If you make more changes to the code in the same branch in your local repository and push them, they will automatically be added to this Pull Request whilst it is open. You can do this if the maintainer or reviewer asks for changes.

Shane has asked for a change

So I can go to my local repository in Azure Data Studio, make the requested change and save the file. If I look in the source control view in Azure Data Studio I can again see there is a change waiting to be committed, and if I click on the name of the file I can open the diff tool to see what the change was

Once I am happy with my change I can commit it again in the same way as before either in the editor or at the command line. The icon at the bottom will change to show that I have one commit in my local repository waiting to be pushed

To do the same thing at the command line I can type git status and see the same thing.

I can then push my change to my remote repository either in the GUI or by using git push

and it will automatically be added to the Pull Request as you can see

Now that the required changes for the review have been made, the review has been approved by Shane and the pull request is now ready to be merged. (You can also see that dbachecks runs some checks against the code when a Pull Request is made)

Many, many thanks to Shane b | t who helped with the writing of this post even whilst on a “no tech” holiday.

Go Ahead – Contribute to an Open Source Project

Hopefully you can now see how easy it is to create a fork of a GitHub repository, clone it to your own machine and contribute. There are many open source projects that you can contribute to.

You can use this process to contribute to the Microsoft Docs for example by clicking on the edit button on any page.

You can contribute to other open source projects like

or go and find the ones that you use and can help with.

Fixing the ‘Failed to generate the compressed file for module ‘C:\Program Files\dotnet\dotnet.exe’ error when deploying to the PowerShell Gallery using Azure DevOps

The PowerShell module for validating your SQL Server estate, dbachecks, is deployed via Azure DevOps; you can see how it is working (or not) via this link.

Grrr Automation for the Lose!

Until recently, this had worked successfully. In the last few weeks I have been receiving errors

 Exception : Microsoft.PowerShell.Commands.WriteErrorException: Failed to generate the compressed file for module 'C:\Program Files\dotnet\dotnet.exe failed to pack: error 
 C:\Program Files\dotnet\sdk\3.0.100\Sdks\NuGet.Build.Tasks.Pack\build\NuGet.Build.Tasks.Pack.targets(198,5): error : 
 Index was outside the bounds of the array. 
 [C:\Users\VssAdministrator\AppData\Local\Temp\cbc14ba6-5832-46fd-be89-04bb552a83ac\Temp.csproj]
'.
At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\2.2.1\PSModule.psm1:10944 char:17
+       Publish-PSArtifactUtility @PublishPSArtifactUtility_Param ...
              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : FailedToCreateCompressedModule,Publish-PSArtifactUtility

You can see these errors in the release pipeline logs here

Confusion

This was very frustrating as it was stopping the continuous delivery to the PowerShell Gallery. It was even more confusing as I was successfully deploying the ADSNotebook module to the gallery using the same method as you can see here.

Raise an Issue on GitHub

I went and looked at the PowerShellGet GitHub repository and opened an issue. I also found another issue regarding Required Modules.

But this doesn't help to get dbachecks released.

Just Try to Make it Work

I asked the wonderful folk in the PowerShell Slack channel (through the magic of automation, you can also interact with them via the powershellhelp channel in the SQL Server Slack), but there were no answers that could assist.

So I had to go searching for an answer. PowerShellGet uses NuGet for package management. I found that if I downloaded an earlier version of nuget.exe and placed it in my user profile (in the right location) I could publish the module.

I found this out by removing the nuget.exe from anywhere useful on the machine and trying to publish the module. The error message says

NuGet.exe upgrade is required to continue
This version of PowerShellGet requires minimum version '4.1.0' of NuGet.exe to publish an item to the NuGet-based repositories. NuGet.exe must be available in 
'C:\ProgramData\Microsoft\Windows\PowerShell\PowerShellGet\' or 'C:\Users\BeardyMcBeardFace\AppData\Local\Microsoft\Windows\PowerShell\PowerShellGet\', or under 
one of the paths specified in PATH environment variable value. NuGet.exe can be downloaded from https://aka.ms/psget-nugetexe. For more information, see 
https://aka.ms/installing-powershellget . Do you want PowerShellGet to upgrade to the latest version of NuGet.exe now?

If I said yes then I got the latest version and the error continued.

However, on my laptop I could go to the nuget downloads page, download an earlier version and place it in one of those paths, and then I could publish the module.

Can I Automate it?

I would rather not have to deploy manually though, and as I use hosted agents my access to the operating system is limited, so I wondered if I could place the nuget.exe in the user profile and have it get used, or whether it would look for the latest one. It turns out it uses the one in the user profile 🙂

So now I have this code as a step in my Azure DevOps Release pipeline before calling Publish-Module and we have automated the releases again.
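The step itself is not shown here, but a rough sketch of the idea looks like this. The nuget.exe version and download URL are assumptions, so use whichever earlier version works for you:

# place an earlier nuget.exe in the user profile PowerShellGet folder so that Publish-Module picks it up
$nugetFolder = "$env:USERPROFILE\AppData\Local\Microsoft\Windows\PowerShell\PowerShellGet"
if (-not (Test-Path $nugetFolder)) {
    $null = New-Item -Path $nugetFolder -ItemType Directory
}
Invoke-WebRequest -Uri 'https://dist.nuget.org/win-x86-commandline/v4.9.4/nuget.exe' -OutFile "$nugetFolder\nuget.exe"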

and now deployments to the PowerShell Gallery are just triggered by the build and the pipeline is green again 🙂

What’s a SQL Notebook in Azure Data Studio?

Azure Data Studio is a cross-platform database tool for data professionals using the Microsoft family of on-premises and cloud data platforms on Windows, MacOS, and Linux.

Recently Vicky Harp tweeted

By the way, you can watch a recording from SQLBits of Vicky’s session


So in the interest of learning about something new I decided to give it a try.

Install The Insiders Edition

Unlike Visual Studio Code which has a link to the insiders download on the front page, you will have to visit the GitHub repository for the links to download the insiders release of Azure Data Studio. Scroll down and you will see

Try out the latest insiders build from master:

See the change log for additional details of what’s in this release.

Once you have installed it, you can connect to an instance, right click and choose New Notebook, or you can use File – New Notebook

Incidentally, I use the docker-compose file here to create the containers, and I map C:\MSSQL\BACKUP\KEEP on my local machine (where my backups are) to /var/opt/mssql/backups on the containers on lines 10 and 17 of the docker-compose file, so change those as required. If you want to follow along then put the ValidationResults.bak in that folder on your local machine.
The Create-Ag.ps1 script shows the code and creates an Availability Group with dbatools. But I digress!

Install Notebook Dependencies

Once you click New Notebook you will get a prompt to install the dependencies.

It will show its output

and take a few minutes to run

It took all of 11 minutes on my machine


Create a Notebook

OK, so now that we have the dependencies installed we can create a notebook. I decided to use the ValidationResults database that I use for my dbachecks demos and describe here. I need to restore it from my local folder that I have mapped as a volume to my container. Of course, I use dbatools for this 🙂

# U: sqladmin P: dbatools.IO
$cred = Get-Credential
$restoreDbaDatabaseSplat = @{
    SqlInstance = $sqlinstance1
    SqlCredential = $cred
    UseDestinationDefaultDirectories = $true
    Path = '/var/opt/mssql/backups/ValidationResults.bak'
}
Restore-DbaDatabase @restoreDbaDatabaseSplat

I had already got a connection saved to the instance in Azure Data Studio; you may need to create a new one using the new connection icon at the top left and filling in the details. The password is in the code above.



Now I can start with my notebook. I am faced with this



I click on text and provide an intro


Once I had written that and clicked out, I couldn’t see what to do straight away!

Then I saw the code and text buttons at the top 🙂 Right, let’s get on with it 🙂 I hit the code button and pasted in the T-SQL to reset the dates in the database to simulate dbachecks having been run this morning.


There’s a run cell button on the right and when I press it

Cool 🙂

If the SQL query has results then they are shown as well

This is fun and I can see plenty of uses for it. Go and have a play with SQL notebooks 🙂

Source Control

I used CTRL + K, CTRL + O to open a folder and saved my notebook in my local Presentations folder, which is source controlled. When I opened the explorer with CTRL + SHIFT + E I could see that the folder and the file are colour coded green and have a U next to them, marking them as Untracked. I can also see that the source control icon has a 1 for the number of files with changes and, in the bottom left, that I am in the master branch.

If I click on the source control icon (or CTRL + SHIFT + G) I can see the files with the changes and can enter a commit message

I then press CTRL + ENTER to commit my change and get this pop-up


As I only have one file and it has all the changes for this commit I click yes. If I had changed more than one file and only wanted to commit a single one at a time I would hover my mouse over the file and click the + to stage my change.



If I make a further change to the notebook and save it, I can see that the source control provider recognises the change but this time the folder the file is in and the file are colour coded brown with an M to show that they have been modified.

Unlike Visual Studio Code, when you then click on the source control icon and click on the change it does not show the differences in the notebook although this works with SQL files.

When I have made all my changes and committed them with good commit messages


I can see that there are 3 local changes ready to be pushed to my remote repository (GitHub in this case) and 0 remote commits in this branch by looking at the bottom left

I can click on the “roundy roundy” icon (I don’t know its proper name 😊) and synchronise my changes. This comes with a pop-up

Personally I never press OK, Don’t Show Again because I like the double check and to think “Is this really what I want to do right now?”. Once I press OK my changes will be synched with the remote repository. All of which means that you can find the notebook I have used in my Presentations GitHub Repository, and that you can run the notebook too using the docker-compose file here and the instructions further up in the post.

Using Docker to run Integration Tests for dbachecks

My wonderful friend André Kamman wrote a fantastic blog post this week SQL Server Container Instances via Cloudshell about how he uses containers in Azure to test code against different versions of SQL Server.

It reminded me that I do something very similar to test dbachecks code changes. I thought this might make a good blog post. I will talk through how I do this locally as I merge a PR from another great friend Cláudio Silva who has added agent job history checks.

GitHub PR VS Code Extension

I use the GitHub Pull Requests extension for VS Code to work with pull requests for dbachecks. This enables me to see all of the information about the Pull Request, merge it, review it and comment on it, all from VS Code

I can also see which files have been changed and which changes have been made

Once I am ready to test the pull request I perform a checkout using the extension

This will update all of the files in my local repository with all of the changes in this pull request

You can see at the bottom left that the branch changes from development to the name of the PR.

Running The Unit Tests

The first thing that I do is to run the Unit Tests for the module. These will test that the code is following all of the guidelines that we require and that the tests are formatted in the correct way for the Power Bi to parse. I have blogged about this here and here and we use this Pester in our CI process in Azure DevOps which I described here.

I navigate to the root of the dbachecks repository on my local machine and run

$testresults = Invoke-Pester .\tests -ExcludeTag Integration -Show Fails -PassThru 

and after about a minute

Thank you Cláudio, the code has passed the tests 😉

Running Some Integration Tests

The difference between Unit tests and Integration tests in a nutshell is that the Unit tests are testing that the code is doing what is expected without any other external influences whilst the Integration tests are checking that the code is doing what is expected when running on an actual environment. In this scenario we know that the code is doing what is expected but we want to check what it does when it runs against a SQL Server and even when it runs against multiple SQL Servers of different versions.

Multiple Versions of SQL Server

As I have described before my friend and former colleague Andrew Pruski b | t has many resources for running SQL in containers. This means that I can quickly and easily create fresh uncontaminated instances of SQL 2012, 2014, 2016 and 2017 really quickly.

I can create 4 instances of different versions of SQL in (a tad over) 1 minute. How about you?

Imagine how long it would take to run the installers for 4 versions of SQL and the pain you would have trying to uninstall them and make sure everything is ‘clean’. Even images that have been sysprep’d won’t be done in 1 minute.

Docker Compose Up ?

So what is this magic command that has enabled me to do this? docker-compose uses a YAML file to define multi-container applications. This means that with a file called docker-compose.yml like this

version: '3.7'

services:
    sql2012:
        image: dbafromthecold/sqlserver2012dev:sp4
        ports:  
          - "15589:1433"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
    sql2014:
        image: dbafromthecold/sqlserver2014dev:sp2
        ports:  
          - "15588:1433"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
    sql2016:
        image: dbafromthecold/sqlserver2016dev:sp2
        ports:  
          - "15587:1433"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
    sql2017:
        image: microsoft/mssql-server-windows-developer:2017-latest
        ports:  
          - "15586:1433"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"

and in that directory just run

docker-compose up -d

and 4 SQL containers are available to you. You can interact with them via SSMS if you wish using localhost,PORTNUMBER. The port numbers in the above file are 15586, 15587, 15588 and 15589

Now it must be noted, as I describe here, that I had already pulled the images to my laptop. The first time you run docker-compose up it will take significantly longer if you haven’t pulled the images already (pulling the images will take quite a while depending on your broadband speed)
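Once the containers are up, a quick way to check that they are all reachable is with dbatools. This is only a sketch; the sa password comes from the docker-compose file above:

$securePassword = ConvertTo-SecureString 'Password0!' -AsPlainText -Force
$saCred = New-Object System.Management.Automation.PSCredential ('sa', $securePassword)
'localhost,15589', 'localhost,15588', 'localhost,15587', 'localhost,15586' | ForEach-Object {
    # connect to each container and show the version it is running
    Connect-DbaInstance -SqlInstance $_ -SqlCredential $saCred | Select-Object Name, VersionString
}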

Credential

The next thing is to save a credential to make it easier to automate. I use the method described by my PowerShell friend Jaap Brasser here. I run this code

$CredentialPath = 'C:\MSSQL\BACKUP\KEEP\sacred.xml'
Get-Credential | Export-Clixml -Path $CredentialPath

and then I can create a credential object using

$cred = Import-Clixml $CredentialPath

Check The Connections

I ensure a clean session by removing the dbatools and dbachecks modules and then import the local version of dbachecks and set some variables

$dbacheckslocalpath = 'GIT:\dbachecks\'
Remove-Module dbatools, dbachecks -ErrorAction SilentlyContinue
Import-Module $dbacheckslocalpath\dbachecks.psd1
$cred = Import-Clixml $CredentialPath
$containers = 'localhost,15589', 'localhost,15588', 'localhost,15587', 'localhost,15586'

Now I can start to run my Integration tests. First reset the dbachecks configuration and set some configuration values

# run the checks against these instances
$null = Set-DbcConfig -Name app.sqlinstance -Value $containers
# We are using SQL authentication
$null = Set-DbcConfig -Name policy.connection.authscheme -Value SQL
# sometimes its a bit slower than the default value
$null = Set-DbcConfig -Name policy.network.latencymaxms -Value 100 # because the containers run a bit slow!

Then I will run the dbachecks connectivity checks and save the results to a variable without showing any output

$ConnectivityTests = Invoke-DbcCheck -SqlCredential $cred -Check Connectivity -Show None -PassThru

I can then use Pester to check that dbachecks has worked as expected by testing if the FailedCount property returned is 0.

Describe "Testing the checks are running as expected" -Tag Integration {
    Context "Connectivity Checks" {
        It "All Tests should pass" {
            $ConnectivityTests.FailedCount | Should -Be 0 -Because "We expect all of the checks to run and pass with default settings"
        }
    }
}

What is the Unit Test for this PR?

Next I think about what we need to be testing for this PR. The Unit tests will help us.

Choose some Integration Tests

This check is checking the Agent job history settings and the unit tests are

  • It “Passes Check Correctly with Maximum History Rows disabled (-1)”
  • It “Fails Check Correctly with Maximum History Rows disabled (-1) but configured value is 1000”
  • It “Passes Check Correctly with Maximum History Rows being 10000”
  • It “Fails Check Correctly with Maximum History Rows being less than 10000”
  • It “Passes Check Correctly with Maximum History Rows per job being 100”
  • It “Fails Check Correctly with Maximum History Rows per job being less than 100”

So we will check the same things on real actual SQL Servers. First though we need to start the SQL Server Agent as it is not started by default. We can do this as follows

docker exec -ti integration_sql2012_1 powershell start-service SQLSERVERAGENT
docker exec -ti integration_sql2014_1 powershell start-service SQLSERVERAGENT
docker exec -ti integration_sql2016_1 powershell start-service SQLSERVERAGENT
docker exec -ti integration_sql2017_1 powershell start-service SQLSERVERAGENT

Unfortunately, the agent service won’t start in the SQL 2014 container so I can’t run agent integration tests for that container, but it’s better than no integration tests.

This is What We Will Test

So we want to test if the check will pass with default settings. In general, dbachecks will pass for default instance, agent or database settings values by default.

We also want the check to fail if the configured value for dbachecks is set to default but the value has been set on the instance.

We want the check to pass if the configured value for the dbachecks configuration is set and the instance (agent, database) setting matches it.

If You Are Doing Something More Than Once ……

Let’s automate that. We are going to be repeatedly running those three tests for each setting that we are running integration tests for. I have created 3 functions for this, again checking that FailedCount or PassedCount is 0 depending on the test.

function Invoke-DefaultCheck {
    It "All Checks should pass with default for $Check" {
        $Tests = get-variable "$($Check)default"  -ValueOnly
        $Tests.FailedCount | Should -Be 0 -Because "We expect all of the checks to run and pass with default setting (Yes we may set some values before but you get my drift)"
    }
}
function Invoke-ConfigCheck {
    It "All Checks should fail when config changed for $Check" {
        $Tests = get-variable "$($Check)configchanged"  -ValueOnly
        $Tests.PassedCount | Should -Be 0 -Because "We expect all of the checks to run and fail when we have changed the config values"
    }
}
function Invoke-ValueCheck {
    It "All Checks should pass when setting changed for $Check" {
        $Tests = get-variable "$($Check)valuechanged"  -ValueOnly
        $Tests.FailedCount | Should -Be 0 -Because "We expect all of the checks to run and pass when we have changed the settings to match the config values"
    }
}

Now I can use those functions inside a loop in my Integration Pester Test

    $TestingTheChecks = @('errorlogscount','jobhistory')
    Foreach ($Check in $TestingTheChecks) {
        Context "$Check Checks" {
            Invoke-DefaultCheck
            Invoke-ConfigCheck
            Invoke-ValueCheck
        }
    }

Write Some Integration Tests

So for this new test I have added a value to the TestingTheChecks array and then I can test my checks. The default check I can check like this

# run the checks against these instances (SQL2014 agent wont start :-( ))
$null = Set-DbcConfig -Name app.sqlinstance -Value $containers.Where{$_ -ne 'localhost,15588'}
# by default all tests should pass on default instance settings
$jobhistorydefault = Invoke-DbcCheck -SqlCredential $cred -Check JobHistory -Show None  -PassThru

Now I need to change the configurations so that they do not match the defaults and run the checks again

#Change the configuration to test that the checks fail
$null = Set-DbcConfig -Name agent.history.maximumjobhistoryrows -value 1000
$null = Set-DbcConfig -Name agent.history.maximumhistoryrows -value 10000
$jobhistoryconfigchanged = Invoke-DbcCheck -SqlCredential $cred -Check JobHistory -Show None  -PassThru

Next we have to change the instance settings so that they match the dbachecks configuration and run the checks and test that they all pass.

We will (of course) use dbatools for this. First we need to find the command that we need

Find-DbaCommand jobserver

and then work out how to use it

Get-Help Set-DbaAgentServer -Detailed

There is an example that does exactly what we want 🙂 So we can run this.

$setDbaAgentServerSplat = @{
    MaximumJobHistoryRows = 1000
    MaximumHistoryRows = 10000
    SqlInstance = $containers.Where{$_ -ne 'localhost,15588'}
    SqlCredential = $cred
}
Set-DbaAgentServer @setDbaAgentServerSplat
$jobhistoryvaluechanged = Invoke-DbcCheck -SqlCredential $cred -Check JobHistory -Show None  -PassThru

Run the Integration Tests

And then we will check that all of the checks are passing and failing as expected

Invoke-Pester .\DockerTests.ps1

Integration Test For Error Log Counts

There is another integration test there for the error logs count. This works in the same way. Here is the code


#region error Log Count - PR 583
# default test
$errorlogscountdefault = Invoke-DbcCheck -SqlCredential $cred -Check ErrorLogCount -Show None  -PassThru
# set a value and then it will fail
$null = Set-DbcConfig -Name policy.errorlog.logcount -Value 10
$errorlogscountconfigchanged = Invoke-DbcCheck -SqlCredential $cred -Check ErrorLogCount -Show None  -PassThru

# set the value and then it will pass
$null = Set-DbaErrorLogConfig -SqlInstance $containers -SqlCredential $cred -LogCount 10
$errorlogscountvaluechanged = Invoke-DbcCheck -SqlCredential $cred -Check ErrorLogCount -Show None  -PassThru
#endregion

Merge the Changes

So with all the tests passing I can merge the PR into the development branch and Azure DevOps will start a build. Ultimately, I would like to add the integration tests to the build as well, following André‘s blog post, but for now I used the GitHub Pull Request extension to merge the pull request into development, which started a build, and then merged that into master, which signed the code and deployed it to the PowerShell Gallery as you can see here, and the result is

https://www.powershellgallery.com/packages/dbachecks/1.1.164

PASS Summit, SQLTrain and My First US SQL Saturday

Next week is the week when I used to dread looking at Twitter and especially the #PASSsummit hashtag, watching all of those folk having a great time in great sessions and doing all of the networking. Last year I was lucky enough to attend for the first time and take part in Chrissy LeMaire and CK’s pre-con; this year I decided to add a US SQL Saturday to the list.

I shall be attending SQL Saturday Oregon in Portland and presenting about dbatools. I am really lucky, my amazing Dad, now that he is retired, happily plays Dad’s taxi (still!) and frequently drives me to the airport when I go away on trips. This is the first time he has really gone Wow! When I asked why, it is because Portland is where Grimm is filmed and he has watched the whole series and loved it!

I am looking forward to ticking off another thing on my list of things to do and presenting at a US SQL Saturday to add to 12 SQL Saturday cities in Europe and multiple other events around the world. If you are there come up and say hi.

I shall also be room monitoring for the PowerShell for the DBA session by Amy Herold t | b. There are still volunteer slots available to help; sign up here if you are going. It is a lot of fun and an excellent way to give something back. SQL Saturdays take a lot of work to organise and organisers are always glad of some help. You will meet new people and have a lot of fun as well.

To get to Seattle I am going on the SQL Train. A whole bunch of data platform folk travelling on a train together up to Seattle for PASS Summit. It looks like it will be a lot of fun 🙂

Once in Seattle it is time for the week where others will not want to look at my twitter feed 🙂 A whole week at PASS Summit watching the sessions and networking (there may be beer involved) with our peers.

My one piece of advice is please don’t hide away in your hotel room for all of the time that sessions are not on. I know that dealing with a large amount of people can be tough and you might need some me time ( I will ) but there are a lot of activities both loud and quieter where you will have the opportunity to meet up and make new friends and contacts which may really pay back further down the line and you will have fun too.

On the Tuesday I am doing a pre-conference session Professional and Proficient PowerShell: From Writing Scripts to Developing Solutions. A whole day where I will be showing how to write PowerShell modules and all of the tips and tricks that I have learnt over the years.

Wednesday sees me embracing my inner André Kamman – unfortunately he is unable to make PASS Summit this year, so I will be delivering the session dbatools Powershell Library – The Complete Introduction in his place in room 2AB. I shall try to do it in as cool and relaxed a way as he does (It probably won’t work, I will get too excited 🙂 )

On Thursday I will be talking about dbachecks in room 6C which will also be streamed on PASSTv.

In between all of that, I shall be attending some networking events, visiting sessions, hanging out with people new and old and walking down the corridors, so if you see me, stop me and say hi, I’d love to meet you 🙂
(note – just before my sessions I may be too nervous to properly have a conversation)

Hopefully, I will meet you there and for the twitter folk stuck back in Europe I empathise 🙂

Checking Trace Flags with dbachecks, online docs and PSPowerHour

It’s been a few weeks since I have blogged as I have been busy with a lot of other things. One of which is preparing for my PASS Summit pre-con, which has led to me improving the CI/CD for dbachecks by adding auto-creation of online documentation, which you can find at https://dbachecks.readthedocs.io or by running Get-Help with the -Online switch for any dbachecks command.
Get-Help Invoke-DbcCheck -Online
I will blog about how dbachecks uses Azure DevOps to do this another time

PSPowerHour

The PowerShell community members Michael T Lombardi and Warren Frame have created PSPowerHour. PSPowerHour is “like a virtual User Group, with a lightning-demo format, and room for non-PowerShell-specific content. Eight community members will give a demo each PowerHour.”
Chrissy blogged about the first one on the dbatools blog.
You can watch the videos on the YouTube channel and keep an eye out for more online PSPowerHours via Twitter or the GitHub page.
While watching the first group of sessions I saw Andrew Wickham demonstrate using dbatools with trace flags, and I thought that needed to be added to dbachecks, so I created an issue. Anyone can do this to file improvements as well as bugs for members of the team to code.

Trace Flags

The previous release of dbachecks brought 2 new checks for trace flags: one for trace flags expected to be running and one for trace flags not expected to be running.
You will need to have installed dbachecks from the PowerShell Gallery to do this. This can be done using
Install-Module -Name dbachecks
Once dbachecks is installed you can find the checks using
Get-DbcCheck
You can filter using the Pattern parameter
Get-DbcCheck -Pattern traceflag
This will show you
  • the UniqueTag which will enable you to run only that check if you wish
  • AllTags which shows which tags will include that check
  • Config will show you which configuration items can be set for this check
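For example, to list every check alongside the configuration item it relies on, something like this works:

Get-DbcCheck | Select-Object UniqueTag, Config | Sort-Object UniqueTag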

The trace flag checks require the app.sqlinstance configuration, which is the list of SQL instances that the checks will run against. You can also specify the instances as a parameter for Invoke-DbcCheck as well.

The configuration for the expected trace flags is policy.traceflags.expected. By default it is set to null. You can see what configuration it has using

Get-DbcConfig policy.traceflags.expected


So if you want to check that there are no trace flags running, then you can run
$instance = 'sql0'
Set-DbcConfig -Name app.sqlinstance -Value $instance
Invoke-DbcCheck -Check TraceFlagsExpected
Maybe this instance is required to have trace flag 1117 enabled so that all files in a filegroup grow equally; you can set the trace flag you expect to be running using
Set-DbcConfig -Name policy.traceflags.expected -Value 1117
Now when you run the check it fails
Invoke-DbcCheck -Check TraceFlagsExpected
and gives you the error message

 [-] Expected Trace Flags 1117 exist on sql0 593ms
Expected 1117 to be found in collection @(), because We expect that Trace Flag 1117 will be set on sql0, but it was not found.

So we have a failing test. We need to fix that. We can use dbatools
Enable-DbaTraceFlag -SqlInstance $instance -TraceFlag 1117
This time when we run the check
Invoke-DbcCheck -Check TraceFlagsExpected
it passes
If you just need to see what trace flags are enabled you can use
Get-DbaTraceFlag -SqlInstance $instance
Reset the configuration for the expected trace flags to an empty array, and then set the configuration for trace flags we do not expect to be running to 1117
Set-DbcConfig -Name policy.traceflags.expected -Value @()
Set-DbcConfig -Name policy.traceflags.notexpected -Value 1117
and then run the trace flags not expected to be running check with
Invoke-DbcCheck -Check TraceFlagsNotExpected
It will fail as 1117 is still running
and give the message
[-] Expected Trace Flags 1117 to not exist on sql0 321ms
Expected 1117 to not be found in collection 1117, because We expect that Trace Flag 1117 will not be set on sql0, but it was found.
So to resolve this failing check we need to disable the trace flag and we can do that with dbatools using
Disable-DbaTraceFlag -SqlInstance $instance -TraceFlag 1117
and now when we run the check
Invoke-DbcCheck -Check TraceFlagsNotExpected
it passes
The checks also work with multiple trace flags, so you can set multiple values for trace flags that are not expected to be running
Set-DbcConfig -Name policy.traceflags.notexpected -Value 1117, 1118
and as we saw earlier, you can run both trace flag checks using
Invoke-DbcCheck -Check TraceFlag
You can use this or any of the 95 available checks to validate that your SQL instances, singular or your whole estate, are as you expect them to be.

Using the PowerShell AST to find a ForEach Method

In dbachecks we enable people to see what checks are available by running Get-DbcCheck. This gives a number of properties including the ‘type’ of check. This refers to the configuration item or parameter that is required to have a value for this check to run.

For example, any check to do with SQL Agent is of type Sqlinstance because it requires an instance to be specified, but a check for SPNs is of type ComputerName because it requires a computer name to run.

Automation for the win

Because I believe in automation I do not want to have to hard code these values anywhere but create them when the module is imported, so we use a json file to feed Get-DbcCheck and populate the json file when we import the module. This is done using the method that I described here and means that whenever a new check is added it is automatically available in Get-DbcCheck without any extra work.

We use code like this
## Parse the file with AST
$CheckFileAST = [Management.Automation.Language.Parser]::ParseInput($check, [ref]$null, [ref]$null)
## Old code we can use the describes
$Describes = $CheckFileAST.FindAll([Func[Management.Automation.Language.Ast, bool]] {
        param ($ast)
        $ast.CommandElements -and
        $ast.CommandElements[0].Value -eq 'describe'
    }, $true)

@($describes).ForEach{
    $groups += $filename
    $Describe = $_.CommandElements.Where{$PSItem.StaticType.name -eq 'string'}[1]
    $title = $Describe.Value
    $Tags = $PSItem.CommandElements.Where{$PSItem.StaticType.name -eq 'Object[]' -and $psitem.Value -eq $null}.Extent.Text.ToString().Replace(', $filename', '')
    # Choose the type
    if ($Describe.Parent -match "Get-Instance") {
        $type = "Sqlinstance"
    }
    elseif ($Describe.Parent -match "Get-ComputerName" -or $Describe.Parent -match "AllServerInfo") {
        $type = "ComputerName"
    }
    elseif ($Describe.Parent -match "Get-ClusterObject") {
        $Type = "ClusteNode"
    }
}
First we parse the code with the AST and store that in the CheckFileAST variable. Then we use the FindAll method to find any command elements that match “describe”, which conveniently gets our Describe blocks, and we can simply match the Parent object (which holds the code) against each of the functions that we use to get the values passed to the tests (Get-ComputerName, Get-Instance and Get-ClusterObject) and set the type appropriately.
When run against a check like this
Describe "Backup Path Access" -Tags BackupPathAccess, Storage, DISA, $filename {
    @(Get-Instance).ForEach{
        if ($NotContactable -contains $psitem) {
            Context "Testing Backup Path Access on $psitem" {
                It "Can't Connect to $Psitem" {
                    $false | Should -BeTrue -Because "The instance should be available to be connected to!"
                }
            }
        }
        else {
            Context "Testing Backup Path Access on $psitem" {
                $backuppath = Get-DbcConfigValue policy.storage.backuppath
                if (-not $backuppath) {
                    $backuppath = (Get-DbaDefaultPath -SqlInstance $psitem).Backup
                }
                It "can access backup path ($backuppath) on $psitem" {
                    Test-DbaSqlPath -SqlInstance $psitem -Path $backuppath | Should -BeTrue -Because 'The SQL Service account needs to have access to the backup path to backup your databases'
                }
            }
        }
    }
}
will find the Describe block and get the title “Backup Path Access” and the tags BackupPathAccess, Storage, DISA, $filename, and then find the Get-Instance and set the type to SqlInstance
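So the entry that ends up feeding Get-DbcCheck for that check looks roughly like this. It is a hand-written illustration using the property names that Get-DbcCheck exposes, not the exact output:

[PSCustomObject]@{
    UniqueTag = 'BackupPathAccess'
    AllTags   = 'BackupPathAccess, Storage, DISA'
    Config    = 'policy.storage.backuppath'
    Type      = 'Sqlinstance'
}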

Until Rob breaks it!

This has worked wonderfully well for 6 months or so of the life of dbachecks but this week I broke it!
The problem was the performance of the code. It is taking a long time to run the tests and I am looking at ways to improve this. I was looking at the Server.Tests file because I thought why not start with one of the smaller files.
It runs the following checks:
  • Server Power Plan Configuration
  • SPNs
  • Disk Space
  • Ping Computer
  • CPUPrioritisation
  • Disk Allocation Unit
  • Instance Connection
and it was looping through the computer names for each check like this
Describe "Server Power Plan Configuration" -Tags PowerPlan, $filename {
    @(Get-ComputerName).ForEach{
    }
}
Describe "Instance Connection" -Tags InstanceConnection, Connectivity, $filename {
    @(Get-Instance).ForEach{
    }
}
Describe "SPNs" -Tags SPN, $filename {
    @(Get-ComputerName).ForEach{
    }
}
Describe "Disk Space" -Tags DiskCapacity, Storage, DISA, $filename {
    @(Get-ComputerName).ForEach{
    }
}
Describe "Ping Computer" -Tags PingComputer, $filename {
    @(Get-ComputerName).ForEach{
    }
}
Describe "CPUPrioritisation" -Tags CPUPrioritisation, $filename {
    @(Get-ComputerName).ForEach{
    }
}
Describe "Disk Allocation Unit" -Tags DiskAllocationUnit, $filename {
    @(Get-ComputerName).ForEach{
    }
}
I altered it to have only one loop for the computer names like so
@(Get-ComputerName).ForEach{
    Describe "Server Power Plan Configuration" -Tags PowerPlan, $filename {
    }
    Describe "SPNs" -Tags SPN, $filename {
    }
    Describe "Disk Space" -Tags DiskCapacity, Storage, DISA, $filename {
    }
    Describe "Ping Computer" -Tags PingComputer, $filename {
    }
    Describe "CPUPrioritisation" -Tags CPUPrioritisation, $filename {
    }
    Describe "Disk Allocation Unit" -Tags DiskAllocationUnit, $filename {
    }
}
Describe "Instance Connection" -Tags InstanceConnection, Connectivity, $filename {
    @(Get-Instance).ForEach{
    }
}
and immediately in testing my checks for the Server Tag decreased in time by about 60% 🙂
I was very happy.
Then I added it to the dbachecks module on my machine, loaded the module and realised that my Json file for Get-DbcCheck was no longer being populated for the type because this line
elseif ($Describe.Parent -match "Get-ComputerName" -or $Describe.Parent -match "AllServerInfo")
was no longer true.

AST for other things

So I googled Management.Automation.Language.Ast and the first result led me to docs.microsoft.com. There are a number of different language elements available there and I found InvokeMemberExpressionAst, which will let me find any methods that have been invoked, so now I can find the loops with
$ComputerNameForEach = $CheckFileAST.FindAll([Func[Management.Automation.Language.Ast, bool]] {
        param ($ast)
        $ast -is [System.Management.Automation.Language.InvokeMemberExpressionAst]
    }, $true)
When I examined the object returned I could see that I could further limit the result to get only the method for Get-ComputerName and then if I choose the Extent I can get the code of that loop
## New code uses a Computer Name loop to speed up execution so need to find that as well
$ComputerNameForEach = $CheckFileAST.FindAll([Func[Management.Automation.Language.Ast, bool]] {
        param ($ast)
        $ast -is [System.Management.Automation.Language.InvokeMemberExpressionAst] -and $ast.expression.Subexpression.Extent.Text -eq 'Get-ComputerName'
    }, $true).Extent

and now I can match the Tags to the type again :-)

if ($ComputerNameForEach -match $title) {
    $type = "ComputerName"
}
and now Get-DbcCheck is returning the right results and the checks are a little faster
You can find dbachecks on the PowerShell Gallery or install it using
Install-Module dbachecks -Scope CurrentUser


dbachecks – Dark Mode Historical Validation PowerBi

In my last post I showed how you can save the results of dbachecks to a database and created a PowerBi report. Inspired by Frank Henninger in the #dbachecks slack channel and Shawn Melton, who explained the difficulties with red/green colour blindness, I then created this one 🙂


You can find it in my GitHub and have a play with it below

Happy Validating!

dbachecks – Save the results to a database for historical reporting

I gave a presentation at SQL Day in Poland last week on dbachecks and one of the questions I got asked was: will you write a command to put the results of the checks into a database for historical reporting?

The answer is no, and here is the reasoning. The capability is already there. Most good PowerShell commands will only return an object, and the beauty of an object is that you can do anything you like with it. Your only limit is your imagination 🙂 I have written about this before here. The other reason is that it would be very difficult to write something that was easily configurable for the different requirements that people will have. But here is one way of doing it.

Create a configuration and save it

Let’s define a configuration and call it production. This is something that I do all of the time so that I can easily run a set of checks with the configuration that I want.

# The computername we will be testing
Set-DbcConfig -Name app.computername -Value $sql0,$SQl1
# The Instances we want to test
Set-DbcConfig -Name app.sqlinstance -Value $sql0,$SQl1
# The database owner we expect
Set-DbcConfig -Name policy.validdbowner.name -Value 'THEBEARD\EnterpriseAdmin'
# the database owner we do NOT expect
Set-DbcConfig -Name policy.invaliddbowner.name -Value 'sa'
# Should backups be compressed by default?
Set-DbcConfig -Name policy.backup.defaultbackupcompression -Value $true
# Do we allow DAC connections?
Set-DbcConfig -Name policy.dacallowed -Value $true
# What recovery model should we have?
Set-DbcConfig -Name policy.recoverymodel.type -value FULL
# What should our database growth type be?
Set-DbcConfig -Name policy.database.filegrowthtype -Value kb
# What authentication scheme are we expecting?
Set-DbcConfig -Name policy.connection.authscheme -Value 'KERBEROS'
# Which Agent Operator should be defined?
Set-DbcConfig -Name agent.dbaoperatorname -Value 'The DBA Team'
# Which Agent Operator email should be defined?
Set-DbcConfig -Name agent.dbaoperatoremail -Value 'TheDBATeam@TheBeard.Local'
# Which failsafe operator should be defined?
Set-DbcConfig -Name agent.failsafeoperator -Value 'The DBA Team'
## Set the database mail profile name
Set-DbcConfig -Name agent.databasemailprofile -Value 'DbaTeam'
# Where is the whoisactive stored procedure?
Set-DbcConfig -Name policy.whoisactive.database -Value master
# What is the maximum time since I took a Full backup?
Set-DbcConfig -Name policy.backup.fullmaxdays -Value 7
# What is the maximum time since I took a DIFF backup (in hours) ?
Set-DbcConfig -Name policy.backup.diffmaxhours -Value 26
# What is the maximum time since I took a log backup (in minutes)?
Set-DbcConfig -Name policy.backup.logmaxminutes -Value 30
# What is my domain name?
Set-DbcConfig -Name domain.name -Value 'TheBeard.Local'
# Where is my Ola database?
Set-DbcConfig -Name policy.ola.database -Value master
# Which database should not be checked for recovery model
Set-DbcConfig -Name policy.recoverymodel.excludedb -Value 'master','msdb','tempdb'
# Should I skip the check for temp files on c?
Set-DbcConfig -Name skip.tempdbfilesonc -Value $true
# Should I skip the check for temp files count?
Set-DbcConfig -Name skip.tempdbfilecount -Value $true
# Which Checks should be excluded?
Set-DbcConfig -Name command.invokedbccheck.excludecheck -Value LogShipping,ExtendedEvent, PseudoSimple,SPN, TestLastBackupVerifyOnly,IdentityUsage,SaRenamed
# How many months before a build is unsupported do I want to fail the test?
Set-DbcConfig -Name policy.build.warningwindow -Value 6
## I need to set the app.cluster configuration to one of the nodes for the HADR check
## and I need to set the domain.name value
Set-DbcConfig -Name app.cluster -Value $SQL0
Set-DbcConfig -Name domain.name -Value 'TheBeard.Local'
## I also skip the ping check for the listener as we are in Azure
Set-DbcConfig -Name skip.hadr.listener.pingcheck -Value $true
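
Before exporting, it is worth a quick sanity check that the values have been applied. A minimal sketch using two of the configuration names from above:

# Check a couple of the values that have just been set
Get-DbcConfigValue -Name app.sqlinstance
Get-DbcConfigValue -Name policy.backup.fullmaxdays
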
Now I can export that configuration to a JSON file and store it on a file share or in source control using the code below. This makes it easy to embed the checks into an automation solution.
Export-DbcConfig -Path Git:\Production.Json
and then I can use it with
Import-DbcConfig -Path Git:\Production.Json
Invoke-DbcCheck
[Image: 01 - Invoke-DbcCheck]
I would use one of the Show parameter values here if I were running it at the command line, probably Fails, to make the information easier to read.
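
That combination looks something like this minimal sketch, assuming the Production.Json exported above:

# Load the production configuration and only display the failing tests
Import-DbcConfig -Path Git:\Production.Json
Invoke-DbcCheck -Show Fails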

Add results to a database

This only gets us the test results on the screen, so if we want to save them to a database we have to use the PassThru parameter of Invoke-DbcCheck. I will run the checks again and save the results to a variable.
$Testresults = Invoke-DbcCheck -PassThru -Show Fails
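
Because PassThru hands back the whole results object, you can have a quick look at what it contains before writing anything to a database. A minimal sketch using properties that appear later in the tables:

# A quick look at the summary numbers on the object returned by PassThru
$Testresults | Select-Object TotalCount, PassedCount, FailedCount, SkippedCount
# and at a single detailed test result
$Testresults.TestResult | Select-Object -First 1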

Then I can use the dbatools Write-DbaDataTable command to write the results to a table in a database. I need to do this twice: once for the summary and once for the test results.

$Testresults | Write-DbaDataTable -SqlInstance $sql0 -Database tempdb -Table Prod_dbachecks_summary -AutoCreateTable
$Testresults.TestResult | Write-DbaDataTable -SqlInstance $sql0 -Database tempdb -Table Prod_dbachecks_detail -AutoCreateTable

and I get two tables, one for the summary

[Image: 02 - summary]

and one for the details

[Image: 03 - detail]

This works absolutely fine, and I could continue to add test results in this fashion, but the results have no date property, so they are not much use for reporting.
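
If all you want is a date, one hedged alternative is to stamp the results in PowerShell before writing them, something like this sketch (the TestDate property and the _dated table name are my own, hypothetical choices):

# Hypothetical alternative: add a run date to each detail row before writing it
$RunDate = Get-Date
$Testresults.TestResult |
    Select-Object *, @{Name = 'TestDate'; Expression = { $RunDate }} |
    Write-DbaDataTable -SqlInstance $sql0 -Database tempdb -Table Prod_dbachecks_detail_dated -AutoCreateTable

The tables-and-triggers approach below goes a step further and also links each detail row back to a summary row.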

Create tables and triggers

This is one way of doing it. I am not sure it is the best way, but it works! I always look forward to seeing how people take ideas and move them forward, so if you have a better or different solution, please blog about it and reference it in the comments below.

First I created a staging table for the summary results

CREATE TABLE [dbachecks].[Prod_dbachecks_summary_stage](
	[TagFilter] [nvarchar](max) NULL,
	[ExcludeTagFilter] [nvarchar](max) NULL,
	[TestNameFilter] [nvarchar](max) NULL,
	[TotalCount] [int] NULL,
	[PassedCount] [int] NULL,
	[FailedCount] [int] NULL,
	[SkippedCount] [int] NULL,
	[PendingCount] [int] NULL,
	[InconclusiveCount] [int] NULL,
	[Time] [bigint] NULL,
	[TestResult] [nvarchar](max) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

and a destination table with a primary key and a date column which defaults to today's date

CREATE TABLE [dbachecks].[Prod_dbachecks_summary](
	[SummaryID] [int] IDENTITY(1,1) NOT NULL,
	[TestDate] [date] NOT NULL,
	[TagFilter] [nvarchar](max) NULL,
	[ExcludeTagFilter] [nvarchar](max) NULL,
	[TestNameFilter] [nvarchar](max) NULL,
	[TotalCount] [int] NULL,
	[PassedCount] [int] NULL,
	[FailedCount] [int] NULL,
	[SkippedCount] [int] NULL,
	[PendingCount] [int] NULL,
	[InconclusiveCount] [int] NULL,
	[Time] [bigint] NULL,
	[TestResult] [nvarchar](max) NULL,
 CONSTRAINT [PK_Prod_dbachecks_summary] PRIMARY KEY CLUSTERED 
(
	[SummaryID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [dbachecks].[Prod_dbachecks_summary] ADD  CONSTRAINT [DF_Prod_dbachecks_summary_TestDate]  DEFAULT (getdate()) FOR [TestDate]
GO

and added an INSERT trigger to the staging table

CREATE TRIGGER [dbachecks].[Load_Prod_Summary] 
   ON   [dbachecks].[Prod_dbachecks_summary_stage]
   AFTER INSERT
AS 
BEGIN
	-- SET NOCOUNT ON added to prevent extra result sets from
	-- interfering with SELECT statements.
	SET NOCOUNT ON;

    INSERT INTO [dbachecks].[Prod_dbachecks_summary] 
	([TagFilter], [ExcludeTagFilter], [TestNameFilter], [TotalCount], [PassedCount], [FailedCount], [SkippedCount], [PendingCount], [InconclusiveCount], [Time], [TestResult])
	SELECT [TagFilter], [ExcludeTagFilter], [TestNameFilter], [TotalCount], [PassedCount], [FailedCount], [SkippedCount], [PendingCount], [InconclusiveCount], [Time], [TestResult] FROM [dbachecks].[Prod_dbachecks_summary_stage]

END
GO

ALTER TABLE [dbachecks].[Prod_dbachecks_summary_stage] ENABLE TRIGGER [Load_Prod_Summary]
GO

and for the details I do the same thing. A details table

CREATE TABLE [dbachecks].[Prod_dbachecks_detail](
	[DetailID] [int] IDENTITY(1,1) NOT NULL,
	[SummaryID] [int] NOT NULL,
	[ErrorRecord] [nvarchar](max) NULL,
	[ParameterizedSuiteName] [nvarchar](max) NULL,
	[Describe] [nvarchar](max) NULL,
	[Parameters] [nvarchar](max) NULL,
	[Passed] [bit] NULL,
	[Show] [nvarchar](max) NULL,
	[FailureMessage] [nvarchar](max) NULL,
	[Time] [bigint] NULL,
	[Name] [nvarchar](max) NULL,
	[Result] [nvarchar](max) NULL,
	[Context] [nvarchar](max) NULL,
	[StackTrace] [nvarchar](max) NULL,
 CONSTRAINT [PK_Prod_dbachecks_detail] PRIMARY KEY CLUSTERED 
(
	[DetailID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [dbachecks].[Prod_dbachecks_detail]  WITH CHECK ADD  CONSTRAINT [FK_Prod_dbachecks_detail_Prod_dbachecks_summary] FOREIGN KEY([SummaryID])
REFERENCES [dbachecks].[Prod_dbachecks_summary] ([SummaryID])
GO

ALTER TABLE [dbachecks].[Prod_dbachecks_detail] CHECK CONSTRAINT [FK_Prod_dbachecks_detail_Prod_dbachecks_summary]
GO

A stage table

CREATE TABLE [dbachecks].[Prod_dbachecks_detail_stage](
	[ErrorRecord] [nvarchar](max) NULL,
	[ParameterizedSuiteName] [nvarchar](max) NULL,
	[Describe] [nvarchar](max) NULL,
	[Parameters] [nvarchar](max) NULL,
	[Passed] [bit] NULL,
	[Show] [nvarchar](max) NULL,
	[FailureMessage] [nvarchar](max) NULL,
	[Time] [bigint] NULL,
	[Name] [nvarchar](max) NULL,
	[Result] [nvarchar](max) NULL,
	[Context] [nvarchar](max) NULL,
	[StackTrace] [nvarchar](max) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

with a trigger

CREATE TRIGGER [dbachecks].[Load_Prod_Detail] 
   ON   [dbachecks].[Prod_dbachecks_detail_stage]
   AFTER INSERT
AS 
BEGIN
	-- SET NOCOUNT ON added to prevent extra result sets from
	-- interfering with SELECT statements.
	SET NOCOUNT ON;
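	-- The SummaryID is taken from the most recent summary row, which was loaded by the summary trigger just before this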

    INSERT INTO [dbachecks].[Prod_dbachecks_detail] 
([SummaryID],[ErrorRecord], [ParameterizedSuiteName], [Describe], [Parameters], [Passed], [Show], [FailureMessage], [Time], [Name], [Result], [Context], [StackTrace])
	SELECT 
	(SELECT MAX(SummaryID) From [dbachecks].[Prod_dbachecks_summary]),[ErrorRecord], [ParameterizedSuiteName], [Describe], [Parameters], [Passed], [Show], [FailureMessage], [Time], [Name], [Result], [Context], [StackTrace]
	FROM [dbachecks].[Prod_dbachecks_detail_stage]

END
GO

ALTER TABLE [dbachecks].[Prod_dbachecks_detail_stage] ENABLE TRIGGER [Load_Prod_Detail]
GO
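
With the tables and triggers in place, a quick metadata query confirms everything was created. A sketch, assuming the objects live in a ValidationResults database:

# Confirm the dbachecks tables exist (ValidationResults is an assumption)
$query = "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'dbachecks'"
Invoke-DbaSqlQuery -SqlInstance $SQL0 -Database ValidationResults -Query $query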

Then I can use Write-DbaDataTable with a couple of extra parameters: FireTriggers to run the triggers, Truncate to empty the staging tables first, and Confirm:$false to avoid any confirmation prompts because I want this to run without any interaction. That gets the results into the database.

$Testresults | Write-DbaDataTable -SqlInstance $Instance -Database $Database -Schema dbachecks -Table Prod_dbachecks_summary_stage -FireTriggers -Truncate -Confirm:$False
$Testresults.TestResult | Write-DbaDataTable -SqlInstance $Instance -Database $Database -Schema dbachecks -Table Prod_dbachecks_detail_stage -FireTriggers -Truncate -Confirm:$False
[Image: detail with stage]

This means that I can now query some of this data and also create Power BI reports from it.
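
For example, a hypothetical query for today's failing checks, using the tables and columns created above and assuming a ValidationResults database:

# Today's failing checks, joining the detail rows to their summary row for the date
$query = "
SELECT s.TestDate, d.Describe, d.Context, d.Name, d.FailureMessage
FROM dbachecks.Prod_dbachecks_detail AS d
JOIN dbachecks.Prod_dbachecks_summary AS s ON s.SummaryID = d.SummaryID
WHERE s.TestDate = CAST(GETDATE() AS date)
AND d.Passed = 0
"
Invoke-DbaSqlQuery -SqlInstance $SQL0 -Database ValidationResults -Query $query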

To be able to report results by dbachecks group, I have to do a little bit of extra manipulation. I can add all of the checks to the database using

Get-DbcCheck | Write-DbaDataTable -SqlInstance $sql0 -Database ValidationResults -Schema dbachecks -Table Checks -Truncate -Confirm:$False -AutoCreateTable

But because the Ola Hallengren job names are configuration items, I need to update the values for those checks, which I can do as follows

$query = "
UPDATE [dbachecks].[Checks] SET [Describe] = 'Ola - " + (Get-DbcConfigValue -Name ola.jobname.systemfull) + "' WHERE [Describe] = 'Ola - `$SysFullJobName'
UPDATE [dbachecks].[Checks] SET [Describe] = 'Ola - " + (Get-DbcConfigValue -Name ola.jobname.UserFull) + "' WHERE [Describe] = 'Ola - `$UserFullJobName'
UPDATE [dbachecks].[Checks] SET [Describe] = 'Ola - " + (Get-DbcConfigValue -Name ola.jobname.UserDiff) + "' WHERE [Describe] = 'Ola - `$UserDiffJobName'
UPDATE [dbachecks].[Checks] SET [Describe] = 'Ola - " + (Get-DbcConfigValue -Name ola.jobname.UserLog) + "' WHERE [Describe] = 'Ola - `$UserLogJobName'
UPDATE [dbachecks].[Checks] SET [Describe] = 'Ola - " + (Get-DbcConfigValue -Name ola.jobname.CommandLogCleanup) + "' WHERE [Describe] = 'Ola - `$CommandLogJobName'
UPDATE [dbachecks].[Checks] SET [Describe] = 'Ola - " + (Get-DbcConfigValue -Name ola.jobname.SystemIntegrity) + "' WHERE [Describe] = 'Ola - `$SysIntegrityJobName'
UPDATE [dbachecks].[Checks] SET [Describe] = 'Ola - " + (Get-DbcConfigValue -Name ola.jobname.UserIntegrity) + "' WHERE [Describe] = 'Ola - `$UserIntegrityJobName'
UPDATE [dbachecks].[Checks] SET [Describe] = 'Ola - " + (Get-DbcConfigValue -Name ola.jobname.UserIndex) + "' WHERE [Describe] = 'Ola - `$UserIndexJobName'
UPDATE [dbachecks].[Checks] SET [Describe] = 'Ola - " + (Get-DbcConfigValue -Name ola.jobname.OutputFileCleanup) + "' WHERE [Describe] = 'Ola - `$OutputFileJobName'
UPDATE [dbachecks].[Checks] SET [Describe] = 'Ola - " + (Get-DbcConfigValue -Name ola.jobname.DeleteBackupHistory) + "' WHERE [Describe] = 'Ola - `$DeleteBackupJobName'
UPDATE [dbachecks].[Checks] SET [Describe] = 'Ola - " + (Get-DbcConfigValue -Name ola.jobname.PurgeBackupHistory) + "' WHERE [Describe] = 'Ola - `$PurgeBackupJobName'
"
Invoke-DbaSqlQuery -SqlInstance $SQL0 -Database ValidationResults -Query $query
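
With those Describe values lined up, the detail results can be joined to the Checks table to report by group. A hypothetical grouping query, assuming the Checks table keeps the Group and Describe columns that Get-DbcCheck outputs and that everything lives in ValidationResults:

# Pass/fail counts per dbachecks group for today's run
$query = "
SELECT c.[Group],
       SUM(CASE WHEN d.Passed = 1 THEN 1 ELSE 0 END) AS Passed,
       SUM(CASE WHEN d.Passed = 0 THEN 1 ELSE 0 END) AS Failed
FROM dbachecks.Prod_dbachecks_detail AS d
JOIN dbachecks.Checks AS c ON c.Describe = d.Describe
JOIN dbachecks.Prod_dbachecks_summary AS s ON s.SummaryID = d.SummaryID
WHERE s.TestDate = CAST(GETDATE() AS date)
GROUP BY c.[Group]
"
Invoke-DbaSqlQuery -SqlInstance $SQL0 -Database ValidationResults -Query $query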

You can get a sample Power BI report in my GitHub, which also has the code from this blog post.

Then you just need to open it in Power BI Desktop and

Click Edit Queries
Click Data Source Settings
Click Change Source
Change the Instance and Database names

[Image: 09 - Power BI]

Then you have an interactive report like this. Feel free to click around and see how it works. Use the arrows at the bottom right to go full-screen. NOTE – it filters by “today”, so if I haven’t run the checks and the import today, click on one of the groups under “Today’s Checks by Group” to see some data.

This enables me to filter the results and see what has happened in the past, so I can filter by one instance
[Image: 05 - filter by instance]
or I can filter by a group of tests
[Image: 07 - filter by group of tests]
or even by a group of tests for an instance
[Image: 08 - filter by group and instance]

Hopefully, this will give you some ideas of what you can do with your dbachecks results. You can find all of the code and the Power BI report in my GitHub.

Happy Validating!