Good Bye Import-CliXML – Use the Secrets Management module for your labs and demos

Don’t want to read all this? There are two dotnet interactive notebooks here with the relevant information for you to use.

https://beard.media/dotnetnotebooks

Jaap is awesome

I have to start here. For the longest time, whenever anyone has asked me how I store my credentials for use in my demos and labs, I have always referred them to Jaap Brasser's blog post

https://www.jaapbrasser.com/quickly-and-securely-storing-your-credentials-powershell/

Joel is also awesome!

When people wanted a method of storing credentials that didn't involve files on disk, I would suggest Joel Bennett's module BetterCredentials, which uses the Windows Credential Manager

https://www.powershellgallery.com/packages/BetterCredentials/4.5

Microsoft? Also awesome!

In February, Microsoft released the SecretManagement module for preview.

https://devblogs.microsoft.com/powershell/secrets-management-development-release/

Sydney gave a presentation at the European PowerShell Conference which you can watch on YouTube.

Good Bye Import-CliXML

So now I say: it is time to stop using Import-Clixml for storing secrets, and to use the Microsoft.PowerShell.SecretsManagement module instead.
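To make the swap concrete, here is a minimal before-and-after sketch. I am assuming the current cmdlet names (Set-Secret and Get-Secret); the early preview used slightly different names, so check Get-Command against the version of the module you have installed.

# Old way: a CliXML file on disk, readable only by this user on this machine
Get-Credential | Export-Clixml -Path .\sqladmin.cred
$cred = Import-Clixml -Path .\sqladmin.cred

# New way: store the credential in a secret vault and read it back by name
Set-Secret -Name sqladmin -Secret (Get-Credential)
$cred = Get-Secret -Name sqladmin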

Notebooks are as good as blog posts

I love notebooks, and to show some people who had asked about storing secrets, I have created some. So, because I am efficient (read: lazy), I have embedded them here for you to see. You can find them in my Jupyter Notebook repository

https://beard.media/dotnetnotebooks

in the Secrets folder

Installing and using the Secrets Management Module

These notebooks may not display on a mobile device unfortunately
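If the notebook will not load for you, the installation boils down to something like this (a sketch: at the time of writing the preview module was named Microsoft.PowerShell.SecretsManagement, -AllowPrerelease needs PowerShellGet 2.x, and the cmdlet names shifted between previews):

# Install the preview module from the PowerShell Gallery
Install-Module Microsoft.PowerShell.SecretsManagement -AllowPrerelease

# List the registered secret vaults - a built-in local vault ships with the module
Get-SecretVault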

Using the Secret Management Module in your scripts

Here is a simple example of using the module to provide the credential for a docker container, and then to dbatools to query the container.

These notebooks may not display on a mobile device unfortunately
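For readers on mobile, the shape of that demo is roughly this (a sketch: the secret name, container name, port, and image tag are illustrative, and I am assuming the secret was stored as a PSCredential so that Get-Secret returns one):

# Pull the stored sa credential back out of the vault
$cred = Get-Secret -Name sqladmin

# Spin up a SQL container that uses the same password
$saPassword = $cred.GetNetworkCredential().Password
docker run -d -p 15789:1433 --env ACCEPT_EULA=Y --env "SA_PASSWORD=$saPassword" --name secretsdemo mcr.microsoft.com/mssql/server:2019-latest

# and query it with dbatools
Invoke-DbaQuery -SqlInstance 'localhost,15789' -SqlCredential $cred -Query 'SELECT @@VERSION'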

How to break a SQL 2019 container on my laptop

Just a very quick post today. At the weekend I blogged about creating SQL 2019 containers with named volumes, enabling you to persist your data, and yesterday about creating a random workload using PowerShell and a big T-SQL script.

The interesting thing about creating workload is that you can break things 🙂

When I created a SQL 2019 container with the data files mapped to a directory on my laptop's C drive with a docker-compose like this

version: '3.7'

services:
    2019-CTP23:
        image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu
        ports:  
          - "15591:1433"
          - "5022:5022"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
        volumes: 
          - C:\MSSQL\BACKUP\KEEP:/var/opt/mssql/backups
          - C:\MSSQL\DockerFiles\datafiles:/var/opt/sqlserver
          - C:\MSSQL\DockerFiles\system:/var/opt/mssql

restored the AdventureWorks database to use the /var/opt/sqlserver directory, and ran a workload, after a while the container stopped, and the logs showed what had happened.

I had a whole load of these errors

2019-04-02 20:48:24.73 spid58      Error: 17053, Severity: 16, State: 1.
2019-04-02 20:48:24.73 spid58      FCB::MakePreviousWritesDurable: Operating system error (null) encountered.
2019-04-02 20:48:24.74 spid58      Error: 9001, Severity: 21, State: 1.
2019-04-02 20:48:24.74 spid58      The log for database 'AdventureWorks2014' is not available. Check the operating system error log for related error messages. Resolve any errors and restart the database.
2019-04-02 20:48:25.05 spid58      Error: 9001, Severity: 21, State: 16.
2019-04-02 20:48:25.05 spid58      The log for database 'AdventureWorks2014' is not available. Check the operating system error log for related error messages. Resolve any errors and restart the database.
2019-04-02 20:48:25.06 spid52      Error: 9001, Severity: 21, State: 16.
2019-04-02 20:48:25.06 spid52      The log for database 'AdventureWorks2014' is not available. Check the operating system error log for related error messages. Resolve any errors and restart the database.

Then some of these

2019-04-02 20:55:16.26 spid53      Error: 17053, Severity: 16, State: 1.
2019-04-02 20:55:16.26 spid53      /var/opt/sqlserver/AdventureWorks2014_Data.mdf: Operating system error 31(A device attached to the system is not functioning.) encountered.

Then it went really bad

2019-04-02 20:55:16.35 spid53      Error: 3314, Severity: 21, State: 3.
2019-04-02 20:55:16.35 spid53      During undoing of a logged operation in database 'AdventureWorks2014' (page (0:0) if any), an error occurred at log record ID (65:6696:25). Typically, the specific failure is logged previously as an error in the operating system error log. Restore the database or file from
a backup, or repair the database.
2019-04-02 20:55:16.37 spid53      Database AdventureWorks2014 was shutdown due to error 3314 in routine 'XdesRMReadWrite::RollbackToLsn'. Restart for non-snapshot databases will be attempted after all connections to the database are aborted.
Restart packet created for dbid 5.
2019-04-02 20:55:16.41 spid53      Error during rollback. shutting down database (location: 1).

after that it tried to restart the database

2019-04-02 20:55:16.44 spid53      Error: 3314, Severity: 21, State: 3.
2019-04-02 20:55:16.44 spid53      During undoing of a logged operation in database 'AdventureWorks2014' (page (0:0) if any), an error occurred at log record ID (65:6696:25). Typically, the specific failure is logged previously as an error in the operating system error log. Restore the database or file from
a backup, or repair the database.
2019-04-02 20:55:16.49 spid53      Error: 3314, Severity: 21, State: 5.
2019-04-02 20:55:16.49 spid53      During undoing of a logged operation in database 'AdventureWorks2014' (page (0:0) if any), an error occurred at log record ID (65:6696:1). Typically, the specific failure
is logged previously as an error in the operating system error log. Restore the database or file from a backup, or repair the database.
Restart packet processing for dbid 5.
2019-04-02 20:55:17.04 spid52      [5]. Feature Status: PVS: 0. CTR: 0. ConcurrentPFSUpdate: 0.
2019-04-02 20:55:17.06 spid52      Starting up database 'AdventureWorks2014'.

But that caused

2019-04-02 20:55:17.90 spid76      Error: 9001, Severity: 21, State: 16.
2019-04-02 20:55:17.90 spid76      The log for database 'master' is not available. Check the operating
system error log for related error messages. Resolve any errors and restart the database.

Master eh? Now what will you do?

2019-04-02 20:55:25.55 spid52      29 transactions rolled forward in database 'AdventureWorks2014' (5:0). This is an informational message only. No user action is required.
2019-04-02 20:55:25.90 spid52      1 transactions rolled back in database 'AdventureWorks2014' (5:0). This is an informational message only. No user action is required.
2019-04-02 20:55:25.90 spid52      Recovery is writing a checkpoint in database 'AdventureWorks2014' (5). This is an informational message only. No user action is required.
2019-04-02 20:55:26.16 spid52      Recovery completed for database AdventureWorks2014 (database ID 5) in 7 second(s) (analysis 424 ms, redo 5305 ms, undo 284 ms.) This is an informational message only. No user action is required.
2019-04-02 20:55:26.21 spid52      Parallel redo is shutdown for database 'AdventureWorks2014' with worker pool size [1].
2019-04-02 20:55:26.27 spid52      CHECKDB for database 'AdventureWorks2014' finished without errors on 2018-03-24 00:38:39.313 (local time). This is an informational message only; no user action is required.

Interesting, then back to this.

2019-04-02 21:00:00.57 spid51      Error: 17053, Severity: 16, State: 1.
2019-04-02 21:00:00.57 spid51      FCB::MakePreviousWritesDurable: Operating system error (null) encountered.
2019-04-02 21:00:00.62 spid51      Error: 9001, Severity: 21, State: 1.
2019-04-02 21:00:00.62 spid51      The log for database 'AdventureWorks2014' is not available. Check the operating system error log for related error messages. Resolve any errors and restart the database.
2019-04-02 21:00:00.64 spid51      Error: 9001, Severity: 21, State: 16.

It did all that again before

This program has encountered a fatal error and cannot continue running at Tue Apr  2 21:04:08 2019
The following diagnostic information is available:

       Reason: 0x00000004
      Message: RETAIL ASSERT: Expression=(false) File=Thread.cpp Line=4643 Description=Timed out waiting for thread terminate/suspend/resume.
   Stacktrace: 000000006af30187 000000006af2836a 000000006ae4a4d1
               000000006ae48c55 000000006af6ab5e 000000006af6ac04
               00000002809528df
      Process: 7 - sqlservr
       Thread: 129 (application thread 0x1e8)
  Instance Id: 215cfcc9-8f69-4869-9a52-5aa44a415a83
     Crash Id: 53e98400-33f1-4786-98fd-484f0c8d9a7e
  Build stamp: 0e53295d0e1704ae5b221538dd6e2322cd46134e0cc32be49c887ca84cdb8c10
 Distribution: Ubuntu 16.04.6 LTS
   Processors: 2
 Total Memory: 4906205184 bytes
    Timestamp: Tue Apr  2 21:04:08 2019

Ubuntu 16.04.6 LTS
Capturing core dump and information to /var/opt/mssql/log...
/usr/bin/find: '/proc/7/task/516': No such file or directory
dmesg: read kernel buffer failed: Operation not permitted
No journal files were found.
No journal files were found.
Attempting to capture a dump with paldumper
WARNING: Capture attempt failure detected
Attempting to capture a filtered dump with paldumper
WARNING: Attempt to capture dump failed.  Reference /var/opt/mssql/log/core.sqlservr.7.temp/log/paldumper-debug.log for details
Attempting to capture a dump with gdb
WARNING: Unable to capture crash dump with GDB. You may need to
allow ptrace debugging, enable the CAP_SYS_PTRACE capability, or
run as root.

failing to capture its dump!! Oops 🙂

I had to recreate the containers without using the named volumes and then I could run my workload 🙂

Nothing particularly useful about this blog post other than an interesting look at the error log when things go wrong 🙂

Persisting databases with named volumes on Windows with docker compose

With all things containers I refer to my good friend Andrew Pruski. Known as dbafromthecold on twitter, he blogs at https://dbafromthecold.com

I was reading his latest blog post Using docker named volumes to persist databases in SQL Server and decided to give it a try.

His instructions worked perfectly and I thought I would try them using a docker-compose file as I like the ease of spinning up containers with them.

I created a docker-compose file like this which will map my backup folder on my Windows 10 laptop to a directory on the container and two more folders to the system folders on the container in the same way as Andrew has in his blog.

version: '3.7'

services:
    2019-CTP23:
        image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu
        ports:  
          - "15591:1433"
          - "5022:5022"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
        volumes: 
          - C:\MSSQL\BACKUP\KEEP:/var/opt/mssql/backups
          - C:\MSSQL\DockerFiles\datafiles:/var/opt/sqlserver
          - C:\MSSQL\DockerFiles\system:/var/opt/mssql

and then from the directory I ran

docker-compose up -d

This will build the containers as defined in the docker-compose file; the -d flag runs them in the background. This was the result.
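As an aside, if you want to check what state the containers ended up in from the terminal rather than the Docker Explorer extension, docker-compose will tell you:

docker-compose ps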

UPDATE – 2019-03-27

I have no idea why, but today it has worked as expected using the above docker-compose file. I had tried this a couple of times, restarted docker and restarted my laptop and was consistently getting the results below – however today it has worked

So feel free to carry on reading, it’s a fun story and it shows how you can persist the databases in a new container but the above docker-compose has worked!

The command completed successfully but, as you can see on the left, the container is red because it is not running. (I am using the Docker Explorer extension for Visual Studio Code.)

I inspected the logs from the container using

docker logs ctp23_2019-CTP23_1

which returned

This is an evaluation version. There are [153] days left in the evaluation period.
This program has encountered a fatal error and cannot continue running at Tue Mar 26 19:40:35 2019
The following diagnostic information is available:

       Reason: 0x00000006
       Status: 0x40000015
      Message: Kernel bug check
      Address: 0x6b643120
   Parameters: 0x10861f680
   Stacktrace: 000000006b72d63f 000000006b64317b 000000006b6305ca
               000000006b63ee02 000000006b72b83a 000000006b72a29d
               000000006b769c02 000000006b881000 000000006b894000
               000000006b89c000 0000000000000001
      Process: 7 - sqlservr
       Thread: 11 (application thread 0x4)
  Instance Id: e01b154f-7986-42c6-ae13-c7d34b8b257d
     Crash Id: 8cbb1c22-a8d6-4fad-bf8f-01c6aa5389b7
  Build stamp: 0e53295d0e1704ae5b221538dd6e2322cd46134e0cc32be49c887ca84cdb8c10
 Distribution: Ubuntu 16.04.6 LTS
   Processors: 2
 Total Memory: 4906205184 bytes
    Timestamp: Tue Mar 26 19:40:35 2019
Ubuntu 16.04.6 LTS
Capturing core dump and information to /var/opt/mssql/log…
dmesg: read kernel buffer failed: Operation not permitted
No journal files were found.
No journal files were found.
Attempting to capture a dump with paldumper
WARNING: Capture attempt failure detected
Attempting to capture a filtered dump with paldumper
WARNING: Attempt to capture dump failed.  Reference /var/opt/mssql/log/core.sqlservr.7.temp/log/paldumper-debug.log for details
Attempting to capture a dump with gdb
WARNING: Unable to capture crash dump with GDB. You may need to
allow ptrace debugging, enable the CAP_SYS_PTRACE capability, or
run as root.

which told me that …………. it hadn’t worked. So I removed the containers with

docker-compose down

I thought I would create the volumes ahead of time like Andrew’s blog had mentioned with

docker volume create mssqlsystem
docker volume create mssqluser

and then use the volume names in the docker-compose file mapped to the system folders in the container. This time the result was

ERROR: Named volume “mssqlsystem:/var/opt/sqlserver:rw” is used in service “2019-CTP23” but no declaration was found in the volumes section.

So that didn't work either 🙂

I decided to inspect the volume definition using

docker volume inspect mssqlsystem
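As a side note, if you only want the mountpoint rather than the whole definition, docker can return just that field with a Go template:

docker volume inspect --format '{{ .Mountpoint }}' mssqlsystem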

I can see the mountpoint is /var/lib/docker/volumes/mssqlsystem/_data so I decided to try a docker-compose like this

version: '3.7'

services:
    2019-CTP23:
        image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu
        ports:  
          - "15591:1433"
          - "5022:5022"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
        volumes: 
          - C:\MSSQL\BACKUP\KEEP:/var/opt/mssql/backups
          - /var/lib/docker/volumes/mssqluser/_data:/var/opt/sqlserver
          - /var/lib/docker/volumes/mssqlsystem/_data:/var/opt/mssql

and then ran docker-compose up without the -d flag so that I could see all of the output

You can see in the output that the system database files are being moved. That looks like it is working, so I used CTRL + C to stop the container and return the terminal. I then ran docker-compose up -d and

I created a special database for Andrew.

I could then remove the container with

docker-compose down

To make sure there is nothing up my sleeve I altered the docker-compose file to use a different name and port but kept the volume definitions the same.

version: '3.7'

services:
    2019-CTP23-Mk1:
        image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu
        ports:  
          - "15592:1433"
          - "5022:5022"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
        volumes: 
          - C:\MSSQL\BACKUP\KEEP:/var/opt/mssql/backups
          - /var/lib/docker/volumes/mssqluser/_data:/var/opt/sqlserver
          - /var/lib/docker/volumes/mssqlsystem/_data:/var/opt/mssql

I ran docker-compose up -d again and connected to the new container and lo and behold the database is still there

So after doing this, I have learned that to persist the databases and to use docker-compose files I had to map the volume to the mountpoint of the docker volume. Except I haven’t, I have learned that sometimes weird things happen with Docker on my laptop!!

Using Docker to run Integration Tests for dbachecks

My wonderful friend André Kamman wrote a fantastic blog post this week SQL Server Container Instances via Cloudshell about how he uses containers in Azure to test code against different versions of SQL Server.

It reminded me that I do something very similar to test dbachecks code changes. I thought this might make a good blog post. I will talk through how I do this locally as I merge a PR from another great friend Cláudio Silva who has added agent job history checks.

GitHub PR VS Code Extension

I use the GitHub Pull Requests extension for VS Code to work with pull requests for dbachecks. This enables me to see all of the information about the Pull Request, and to merge it, review it, and comment on it, all from VS Code

I can also see which files have been changed and which changes have been made

Once I am ready to test the pull request I perform a checkout using the extension

This will update all of the files in my local repository with all of the changes in this pull request

You can see at the bottom left that the branch changes from development to the name of the PR.

Running The Unit Tests

The first thing that I do is to run the Unit Tests for the module. These test that the code follows all of the guidelines that we require and that the tests are formatted in the correct way for the Power BI to parse. I have blogged about this here and here, and we use this Pester in our CI process in Azure DevOps, which I described here.

I navigate to the root of the dbachecks repository on my local machine and run

$testresults = Invoke-Pester .\tests -ExcludeTag Integration -Show Fails -PassThru 

and after about a minute

Thank you Cláudio, the code has passed the tests 😉

Running Some Integration Tests

The difference between Unit tests and Integration tests, in a nutshell, is that Unit tests check that the code does what is expected without any external influences, whilst Integration tests check that the code does what is expected when running against an actual environment. In this scenario we know that the code is doing what is expected, but we want to check what it does when it runs against a SQL Server, and even when it runs against multiple SQL Servers of different versions.

Multiple Versions of SQL Server

As I have described before, my friend and former colleague Andrew Pruski b | t has many resources for running SQL in containers. This means that I can quickly and easily create fresh, uncontaminated instances of SQL 2012, 2014, 2016 and 2017.

I can create 4 instances of different versions of SQL in (a tad over) 1 minute. How about you?

Imagine how long it would take to run the installers for 4 versions of SQL and the pain you would have trying to uninstall them and make sure everything is ‘clean’. Even images that have been sysprep’d won’t be done in 1 minute.

Docker Compose Up ?

So what is this magic command that has enabled me to do this? docker-compose uses a YAML file to define multi-container applications. This means that with a file called docker-compose.yml like this

version: '3.7'

services:
    sql2012:
        image: dbafromthecold/sqlserver2012dev:sp4
        ports:  
          - "15589:1433"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
    sql2014:
        image: dbafromthecold/sqlserver2014dev:sp2
        ports:  
          - "15588:1433"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
    sql2016:
        image: dbafromthecold/sqlserver2016dev:sp2
        ports:  
          - "15587:1433"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
    sql2017:
        image: microsoft/mssql-server-windows-developer:2017-latest
        ports:  
          - "15586:1433"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"

and in that directory just run

docker-compose up -d

and 4 SQL containers are available to you. You can interact with them via SSMS if you wish by connecting to localhost,PORTNUMBER. The port numbers in the above file are 15586, 15587, 15588 and 15589

Now it must be noted, as I describe here, that first I pulled the images to my laptop. The first time you run docker-compose it will take significantly longer if you haven't pulled the images already (pulling the images can take quite a while depending on your broadband speed)

Credential

The next thing is to save a credential to make it easier to automate. I use the method described by my PowerShell friend Jaap Brasser here. I run this code

$CredentialPath = 'C:\MSSQL\BACKUP\KEEP\sacred.xml'
Get-Credential | Export-Clixml -Path $CredentialPath

and then I can create a credential object using

$cred = Import-Clixml $CredentialPath

Check The Connections

I ensure a clean session by removing the dbatools and dbachecks modules and then import the local version of dbachecks and set some variables

$dbacheckslocalpath = 'GIT:\dbachecks\'
Remove-Module dbatools, dbachecks -ErrorAction SilentlyContinue
Import-Module $dbacheckslocalpath\dbachecks.psd1
$cred = Import-Clixml $CredentialPath
$containers = 'localhost,15589', 'localhost,15588', 'localhost,15587', 'localhost,15586'

Now I can start to run my Integration tests. First reset the dbachecks configuration and set some configuration values

# run the checks against these instances
$null = Set-DbcConfig -Name app.sqlinstance -Value $containers
# We are using SQL authentication
$null = Set-DbcConfig -Name policy.connection.authscheme -Value SQL
# sometimes it's a bit slower than the default value
$null = Set-DbcConfig -Name policy.network.latencymaxms -Value 100 # because the containers run a bit slow!

Then I will run the dbachecks connectivity checks and save the results to a variable without showing any output

$ConnectivityTests = Invoke-DbcCheck -SqlCredential $cred -Check Connectivity -Show None -PassThru

I can then use Pester to check that dbachecks has worked as expected by testing if the FailedCount property returned is 0.

Describe "Testing the checks are running as expected" -Tag Integration {
    Context "Connectivity Checks" {
        It "All Tests should pass" {
            $ConnectivityTests.FailedCount | Should -Be 0 -Because "We expect all of the checks to run and pass with default settings"
        }
    }
}

What is the Unit Test for this PR?

Next I think about what we need to be testing for this PR. The Unit tests will help us.

Choose some Integration Tests

This check is checking the Agent job history settings and the unit tests are

  • It “Passes Check Correctly with Maximum History Rows disabled (-1)”
  • It “Fails Check Correctly with Maximum History Rows disabled (-1) but configured value is 1000”
  • It “Passes Check Correctly with Maximum History Rows being 10000”
  • It “Fails Check Correctly with Maximum History Rows being less than 10000”
  • It “Passes Check Correctly with Maximum History Rows per job being 100”
  • It “Fails Check Correctly with Maximum History Rows per job being less than 100”

So we will check the same things on actual SQL Servers. First though, we need to start the SQL Server Agent, as it is not started by default. We can do this as follows

docker exec -ti integration_sql2012_1 powershell start-service SQLSERVERAGENT
docker exec -ti integration_sql2014_1 powershell start-service SQLSERVERAGENT
docker exec -ti integration_sql2016_1 powershell start-service SQLSERVERAGENT
docker exec -ti integration_sql2017_1 powershell start-service SQLSERVERAGENT

Unfortunately, the agent service won't start in the SQL 2014 container so I can't run agent integration tests for that container, but it's better than no integration tests.

This is What We Will Test

So we want to test if the check will pass with default settings. In general, dbachecks checks will pass by default against default instance, agent, or database setting values.

We also want the check to fail if the configured value for dbachecks is set to default but the value has been set on the instance.

We want the check to pass if the configured value for the dbachecks configuration is set and the instance (agent, database) setting matches it.

If You Are Doing Something More Than Once ……

Let's automate that. We are going to be repeatedly running those three tests for each setting that we are running integration tests for. I have created 3 functions for this, again checking that FailedCount or PassedCount is 0 depending on the test.

function Invoke-DefaultCheck {
    It "All Checks should pass with default for $Check" {
        $Tests = Get-Variable "$($Check)default" -ValueOnly
        $Tests.FailedCount | Should -Be 0 -Because "We expect all of the checks to run and pass with default setting (Yes we may set some values before but you get my drift)"
    }
}
function Invoke-ConfigCheck {
    It "All Checks should fail when config changed for $Check" {
        $Tests = Get-Variable "$($Check)configchanged" -ValueOnly
        $Tests.PassedCount | Should -Be 0 -Because "We expect all of the checks to run and fail when we have changed the config values"
    }
}
function Invoke-ValueCheck {
    It "All Checks should pass when setting changed for $Check" {
        $Tests = Get-Variable "$($Check)valuechanged" -ValueOnly
        $Tests.FailedCount | Should -Be 0 -Because "We expect all of the checks to run and pass when we have changed the settings to match the config values"
    }
}

Now I can use those functions inside a loop in my Integration Pester Test

$TestingTheChecks = @('errorlogscount', 'jobhistory')
Foreach ($Check in $TestingTheChecks) {
    Context "$Check Checks" {
        Invoke-DefaultCheck
        Invoke-ConfigCheck
        Invoke-ValueCheck
    }
}

Write Some Integration Tests

So for this new test I have added a value to the TestingTheChecks array, then I can test my checks. The default check I can run like this

# run the checks against these instances (SQL2014 agent won't start :-( )
$null = Set-DbcConfig -Name app.sqlinstance -Value ($containers.Where{$_ -ne 'localhost,15588'})
# by default all tests should pass on default instance settings
$jobhistorydefault = Invoke-DbcCheck -SqlCredential $cred -Check JobHistory -Show None  -PassThru

Now I need to change the configurations so that they do not match the defaults and run the checks again

#Change the configuration to test that the checks fail
$null = Set-DbcConfig -Name agent.history.maximumjobhistoryrows -value 1000
$null = Set-DbcConfig -Name agent.history.maximumhistoryrows -value 10000
$jobhistoryconfigchanged = Invoke-DbcCheck -SqlCredential $cred -Check JobHistory -Show None  -PassThru

Next we have to change the instance settings so that they match the dbachecks configuration and run the checks and test that they all pass.

We will (of course) use dbatools for this. First we need to find the command that we need

Find-DbaCommand jobserver

and then work out how to use it

Get-Help Set-DbaAgentServer -Detailed

There is an example that does exactly what we want 🙂 So we can run this.

$setDbaAgentServerSplat = @{
    MaximumJobHistoryRows = 1000
    MaximumHistoryRows = 10000
    SqlInstance = $containers.Where{$_ -ne 'localhost,15588'}
    SqlCredential = $cred
}
Set-DbaAgentServer @setDbaAgentServerSplat
$jobhistoryvaluechanged = Invoke-DbcCheck -SqlCredential $cred -Check JobHistory -Show None  -PassThru

Run the Integration Tests

And then we will check that all of the checks are passing and failing as expected

Invoke-Pester .\DockerTests.ps1

Integration Test For Error Log Counts

There is another integration test there for the error logs count. This works in the same way. Here is the code


#region error Log Count - PR 583
# default test
$errorlogscountdefault = Invoke-DbcCheck -SqlCredential $cred -Check ErrorLogCount -Show None  -PassThru
# set a value and then it will fail
$null = Set-DbcConfig -Name policy.errorlog.logcount -Value 10
$errorlogscountconfigchanged = Invoke-DbcCheck -SqlCredential $cred -Check ErrorLogCount -Show None  -PassThru

# set the value and then it will pass
$null = Set-DbaErrorLogConfig -SqlInstance $containers -SqlCredential $cred -LogCount 10
$errorlogscountvaluechanged = Invoke-DbcCheck -SqlCredential $cred -Check ErrorLogCount -Show None  -PassThru
#endregion

Merge the Changes

So with all the tests passing I can merge the PR into the development branch and Azure DevOps will start a build. Ultimately, I would like to add the integration tests to the build as well, following André's blog post, but for now I used the GitHub Pull Request extension to merge the pull request into development, which started a build, and then merged that into master, which signed the code and deployed it to the PowerShell Gallery, as you can see here, and the result is

https://www.powershellgallery.com/packages/dbachecks/1.1.164

Running Windows and Linux SQL Containers together

Just for fun I decided to spend Christmas Eve getting Windows and Linux SQL containers running together.

WARNING

This is NOT a production ready solution, in fact I would not even recommend that you try it.
I definitely wouldn’t recommend it on any machine with anything useful on it that you want to use again.
We will be using a re-compiled dockerd.exe created by someone else, and you know the rules about downloading things from the internet, don't you? And about trusting unknown, unverified people?

Maybe you can try this in an Azure VM or somewhere else safe.

Anyway, with that in mind, let's go.

Linux Containers On Windows

You can run Linux containers on Windows in Docker as follows. You need to be running the latest Docker for Windows.

Right click on the whale in the task bar and select Settings

Notice that I am running Windows Containers as there is a switch to Linux containers option. If you see Switch to Windows containers then click that first.

Click on Daemon and then tick the experimental features tick box and press apply.

Docker will restart and you can now run Linux containers alongside windows containers.

So you can pull the Ubuntu container with

docker pull ubuntu:18.04

and then you can run it with

docker run -it --name ubuntu ubuntu:18.04

There you go, one Linux container running 🙂

A good resource for learning bash for SQL Server DBAs is Kellyn Pot'Vin-Gorman's b | t series on Simple Talk

Type exit to get out of the container, and to remove it run

docker rm ubuntu


Running SQL Linux Containers On Windows

So can we run SQL Containers ?

Well, we can pull the image successfully.

docker pull mcr.microsoft.com/mssql/server:2019-CTP2.2-ubuntu

If you try that without the experimental features enabled you will get this error.

image operating system “linux” cannot be used on this platform

So you would think that what you can do is to use the code from Andrew ‘dbafromthecold’ Pruski’s b | t excellent container series

docker run -d -p 15789:1433 --env ACCEPT_EULA=Y --env SA_PASSWORD=Testing1122 --name testcontainer mcr.microsoft.com/mssql/server:2019-CTP2.2-ubuntu

When you do, the command will finish successfully but the container won't be started (as can be seen by the red dot in the docker explorer).

If you look at the logs for the container (I am lazy, I right click on the container and choose show logs in VS Code 🙂 ) you will see

sqlservr: This program requires a machine with at least 2000 megabytes of memory.
/opt/mssql/bin/sqlservr: This program requires a machine with at least 2000 megabytes of memory.
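You can also get the same logs from the terminal, using the container name from the docker run command above:

docker logs testcontainer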

Now, if you are running Linux containers, this is an easy fix. All you have to do is to right click on the whale in the taskbar, choose Settings, Advanced and move the slider for the Memory and click apply.

But in Windows containers that option is not available.

If you go a-googling you will find that Shawn Melton created an issue for this many months ago, which gets referenced by this issue for the guest compute service, which references this PR in moby. But as this hasn’t been merged into master yet it is not available. I got bored of waiting for this and decided to look a bit deeper today.

Get It Working Just For Fun

So, you read the warning at the top?

Now let’s get it working. I take zero credit here. All of the work was done by Brian Weeteling b | G in this post

So you can follow Brian's examples and check out the source code and compile it as he says, or you can download the exe that he has made available (remember the warning?)

Stop Docker for Windows, and with the file downloaded and unzipped, open an admin PowerShell, navigate to the directory the dockerd.exe file is in, and run

.\dockerd.exe

You will get an output like this and it will keep going for a while.

Leave this window open whilst you are using Docker like this. Once you see that it is ready, open a new PowerShell window or VS Code. You will need to run it as admin. I ran

docker ps -a

to see if it was up and available.

I also had to create a bootx64.efi file at C:\Program Files\Linux Containers which I did by copying and renaming the kernel file in that folder.
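In PowerShell that copy-and-rename looks something like this (a sketch; I am assuming the kernel file in that folder is the one simply named kernel, so check the folder contents first):

# Copy the Linux Containers kernel file and name the copy bootx64.efi
$lcow = 'C:\Program Files\Linux Containers'
Copy-Item -Path (Join-Path $lcow 'kernel') -Destination (Join-Path $lcow 'bootx64.efi')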

Now I can use a docker-compose file to create 5 containers. Four will be Windows containers from Andrew's Docker Hub repositories or Microsoft's Docker Hub, for SQL 2012, SQL 2014, SQL 2016, and SQL 2017, and one will be the latest Ubuntu SQL 2019 CTP 2.2 image. Note that you have to use version 2.4 of docker-compose, as the platform tag is not available yet in any later version, although it is coming to 3.7 soon.

version: '2.4'

services:
    sql2019:
        image: mcr.microsoft.com/mssql/server:2019-CTP2.2-ubuntu
        platform: linux
        ports:  
          - "15585:1433"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
    sql2012:
        image: dbafromthecold/sqlserver2012dev:sp4
        platform: windows
        ports:  
          - "15589:1433"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
    sql2014:
        image: dbafromthecold/sqlserver2014dev:sp2
        platform: windows
        ports:  
          - "15588:1433"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
    sql2016:
        image: dbafromthecold/sqlserver2016dev:sp2
        platform: windows
        ports:  
          - "15587:1433"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
    sql2017:
        image: microsoft/mssql-server-windows-developer:2017-latest
        platform: windows
        ports:  
          - "15586:1433"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"

Save this code as docker-compose.yml and navigate to the directory in an admin PowerShell or VS Code and run

docker-compose up -d

and now I have Windows and Linux SQL containers running together. This means that I can test some code against all versions of SQL from 2012 to 2019 easily in containers 🙂

So that is just a bit of fun.

To return to the normal Docker, simply CTRL and C the admin PowerShell you ran .\dockerd.exe in and you will see the logs showing it shutting down.

You will then be able to start Docker For Windows as usual.

I look forward to the time, hopefully early next year, when all of the relevant PRs have been merged and this is available in Docker for Windows.

Happy Automating 🙂

Creating SQL Server Containers for versions 2012-2017

I am working on my dbatools and dbachecks presentations for SQL Saturday Finland, SQLDays, SQL Saturday Cork and SQLGrillen. I want to show the two modules running against a number of SQL versions, so I have installed

  • 2 Domain Controllers
  • 2 SQL 2017 instances on Windows 2016 with an Availability Group and WideWorldImporters database
  • 1 Windows 2016 jump box with all the programmes I need
  • 1 Windows 2016 with containers

using a VSTS build and this set of ARM templates and scripts

I wanted to create containers running SQL2017, SQL2016, SQL2014 and SQL2012 and restore versions of the AdventureWorks database onto each one.

Move Docker Location

I redirected my docker location from my C:\ drive to my E:\ drive so I didn't run out of space. I did this by creating a daemon.json file in C:\ProgramData\docker\config and adding

{"data-root": "E:\\containers"}

and restarting the docker service, which created the folder structure in the new location. Then I ran

docker volume create SQLBackups

to create a volume to hold the backups that I could mount on the containers
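Scripted, the data-root move looks roughly like this (a sketch; run it from an elevated PowerShell session, and note that restarting the service stops any running containers):

# Point the docker data root at E:\containers and restart the service to pick it up
Set-Content -Path 'C:\ProgramData\docker\config\daemon.json' -Value '{"data-root": "E:\\containers"}'
Restart-Service docker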

AdventureWorks Backups

I downloaded all the AdventureWorks backups from GitHub and copied them to E:\containers\volumes\sqlbackups\_data

Get-ChildItem $Home\Downloads\AdventureWorks* | Copy-Item -Destination E:\containers\volumes\sqlbackups\_data

Getting the Images

To download the SQL 2017 image from the DockerHub I ran

docker pull microsoft/mssql-server-windows-developer:latest

and waited for it to download and extract

I also needed the images for other versions. My good friend Andrew Pruski b | t has versions available for us to use on his Docker Hub, so it is just a case of running

docker pull dbafromthecold/sqlserver2016dev:sp1
docker pull dbafromthecold/sqlserver2014dev:sp2
docker pull dbafromthecold/sqlserver2012dev:sp4

and waiting for those to download and extract (this can take a while!)

Create the containers

Creating the containers is as easy as

docker run -d -p ExposedPort:InternalPort --name NAME -v VolumeName:LocalFolder -e sa_password=THEPASSWORD -e ACCEPT_EULA=Y IMAGENAME

so all I needed to run to create 4 SQL containers, one of each version, was

docker run -d -p 15789:1433 --name 2017 -v sqlbackups:C:\SQLBackups -e sa_password=PruskiIsSQLContainerMan! -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer
docker run -d -p 15788:1433 --name 2016 -v sqlbackups:C:\SQLBackups -e sa_password=PruskiIsSQLContainerMan! -e ACCEPT_EULA=Y dbafromthecold/sqlserver2016dev:sp1
docker run -d -p 15787:1433 --name 2014 -v sqlbackups:C:\SQLBackups -e sa_password=PruskiIsSQLContainerMan! -e ACCEPT_EULA=Y dbafromthecold/sqlserver2014dev:sp2
docker run -d -p 15786:1433 --name 2012 -v sqlbackups:C:\SQLBackups -e sa_password=PruskiIsSQLContainerMan! -e ACCEPT_EULA=Y dbafromthecold/sqlserver2012dev:sp4

and just a shade over 12 seconds later I have 4 SQL instances ready for me 🙂


Storing Credentials

This is not something I would do in a production environment, but I save my credentials using this method that Jaap Brasser b | t shared here

Get-Credential | Export-Clixml -Path $HOME\Documents\sa.cred

which means that I can get the credentials in my PowerShell session (as long as it is the same user that created the file) using

$cred = Import-Clixml $HOME\Documents\sa.cred

Restoring the databases

I restored all of the AdventureWorks databases that each instance will support onto each instance, so 2017 has all of them whilst 2012 only has the 2012 versions.

First I needed to get the filenames of the backup files into a variable

$filenames = (Get-ChildItem '\\bearddockerhost\e$\containers\volumes\sqlbackups\_data').Name

and the container connection strings, which are the hostname and the port number

$containers = 'bearddockerhost,15789', 'bearddockerhost,15788', 'bearddockerhost,15787', 'bearddockerhost,15786'
Then I can restore the databases using dbatools, with a switch statement on the version, which I get from the NameLevel property of Get-DbaSqlBuildReference:
$cred = Import-Clixml $HOME\Documents\sa.cred
$containers = 'bearddockerhost,15789', 'bearddockerhost,15788', 'bearddockerhost,15787', 'bearddockerhost,15786'
$filenames = (Get-ChildItem '\\bearddockerhost\e$\containers\volumes\sqlbackups\_data').Name
$containers.ForEach{
    $Container = $Psitem
    $NameLevel = (Get-DbaSqlBuildReference -SqlInstance $Container -SqlCredential $cred).NameLevel
    switch ($NameLevel) {
        2017 {
            Restore-DbaDatabase -SqlInstance $Container -SqlCredential $cred -Path C:\sqlbackups\ -UseDestinationDefaultDirectories -WithReplace | Out-Null
            Write-Verbose -Message "Restored Databases on 2017"
        }
        2016 {
            $Files = $Filenames.Where{$PSitem -notlike '*2017*'}.ForEach{'C:\sqlbackups\' + $Psitem}
            Restore-DbaDatabase -SqlInstance $Container -SqlCredential $cred -Path $Files -UseDestinationDefaultDirectories -WithReplace
            Write-Verbose -Message "Restored Databases on 2016"
        }
        2014 {
            $Files = $Filenames.Where{$PSitem -notlike '*2017*' -and $Psitem -notlike '*2016*'}.ForEach{'C:\sqlbackups\' + $Psitem}
            Restore-DbaDatabase -SqlInstance $Container -SqlCredential $cred -Path $Files -UseDestinationDefaultDirectories -WithReplace
            Write-Verbose -Message "Restored Databases on 2014"
        }
        2012 {
            $Files = $Filenames.Where{$PSitem -like '*2012*'}.ForEach{'C:\sqlbackups\' + $Psitem}
            Restore-DbaDatabase -SqlInstance $Container -SqlCredential $cred -Path $Files -UseDestinationDefaultDirectories -WithReplace
            Write-Verbose -Message "Restored Databases on 2012"
        }
        Default {}
    }
}
I create the file paths for each backup file by getting the correct backups and appending the names to C:\SQLBackups, which is where the volume is mounted inside the container.

As Get-DbaDatabase gives the container ID as the Computer Name, I have highlighted each container below.

That is how easy it is to create a number of SQL containers of differing versions for your presentations or exploring needs.
Happy Automating!

dbatools with SQL on Docker and running SQL queries

I had a question from my good friend Andrew Pruski dbafromthecold on twitter or SQL Container Man as I call him 🙂

How do you guys run SQL Commands in dbatools

I will answer that at the bottom of this post, but during our discussion Andrew said he wanted to show the version of the SQL running in the Docker Container.

That's easy, I said. Here's how to do it

You need to have installed Docker first, see this page. You can switch to using Windows containers by right-clicking on the icon in the taskbar and choosing the command. If you have not already, pull the SQL 2017 image using

docker pull microsoft/mssql-server-windows-developer:latest

This may take a while to download and extract the image, but it's worth it, you will be able to spin up new SQL instances in no time

You can create a new SQL Docker container like this

docker run -d -p 15789:1433 --env ACCEPT_EULA=Y --env sa_password=SQL2017Password01 --name SQL2017 microsoft/mssql-server-windows-developer:latest

In only a few seconds you have a SQL 2017 instance up and running (Take a look at Andrews blog at dbafromthecold.com for a great container series with much greater detail)

Now that we have our container we need to connect to it, and for that we need to gather the IP address. We can do this using the docker inspect command, but I like to make things a little more programmatic. This works on my Windows 10 machine for Windows SQL containers. It appears there are some errors on other machines, but there is an alternative below

$inspect = docker inspect SQL2017
<#
IPAddress": matches the characters IPAddress": literally (case sensitive)
\s matches any whitespace character (equal to [\r\n\t\f\v ])
" matches the character " literally (case sensitive)
1st Capturing Group (\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3})
\d{1,3} matches a digit (equal to [0-9])
. matches any character (except for line terminators)
\d{1,3} matches a digit (equal to [0-9])
. matches any character (except for line terminators)
\d{1,3} matches a digit (equal to [0-9])
. matches any character (except for line terminators)
\d{1,3} matches a digit (equal to [0-9])
#>
$IpAddress = [regex]::matches($inspect,"IPAddress`":\s`"(\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3})").groups[1].value

Those two lines of code (and several lines of comments) puts the results of the docker inspect command into a variable and then uses regex to pull out the IP Address

If you are getting errors with that you can also use

$IPAddress = docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' containername

Thanks Andrew 🙂

Now we just need our credentials to connect to the instance

$cred = Get-Credential -UserName SA -Message "Enter SA Password Here"

and we can connect to our SQL container

$srv = Connect-DbaInstance -SqlInstance $IpAddress -Credential $cred

and get the version

$srv.Version

and many many other properties, just run

$srv | Get-Member

to see them. At the bottom, you will see a ScriptMethod called Query, which means that you can do things like

$Query = @"
SELECT @@Version
"@

$srv.Query($Query)

$srv.Query($Query).column1

Which looks like

It’s slightly different with a Linux SQL container. Switch Docker to run Linux containers by right-clicking on the icon in the taskbar and choosing the command to switch.
If you haven't already, pull the Linux SQL image

docker pull microsoft/mssql-server-linux:2017-latest

and then create a container

docker run -d -p 15789:1433 --env ACCEPT_EULA=Y --env SA_PASSWORD=SQL2017Password01 --name linuxcontainer microsoft/mssql-server-linux:2017-latest

Now we just need to connect with localhost and the port number which we have specified already and we can connect again

$LinuxSQL = 'Localhost,15789'
$linuxsrv = Connect-DbaInstance -SqlInstance $LinuxSQL -Credential $cred
$linuxsrv.Version
$linuxsrv.HostDistribution
$linuxsrv.Query($query).column1

Of course, this isn't restricted to just Connect-DbaInstance, you can do this with any dbatools command

Get-DbaDatabase -SqlInstance $LinuxSQL -SqlCredential $cred

Go and explore your Docker SQL containers with dbatools 🙂

You can get dbatools using

Install-Module dbatools

and find commands with

Find-DbaCommand database

Don’t forget to use Get-Help with the name of the command to get information about how to use it

Get-Help Find-DbaCommand -detailed

Enjoy 🙂