I have to start here. For the longest time, whenever anyone has asked me how I store my credentials for use in my demos and labs, I have always referred them to Jaap Brasser's t blog post
When people wanted a method of storing credentials that didn't involve files on disk, I would suggest Joel Bennett's t module BetterCredentials, which uses the Windows Credential Manager
Sydney t gave a presentation at the European PowerShell Conference, which you can watch on YouTube.
Goodbye Import-Clixml
So now I say: it is time to stop using Import-Clixml for storing secrets and to use the Microsoft.PowerShell.SecretsManagement module instead.
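The module was in preview when I wrote this, so cmdlet names may shift; here is a minimal sketch using the names the module later shipped with (the SecretStore vault extension is an assumption):
# install the secrets modules (names as shipped; the preview's names may differ)
Install-Module Microsoft.PowerShell.SecretManagement, Microsoft.PowerShell.SecretStore
# register a default vault to hold the secrets
Register-SecretVault -Name MySecrets -ModuleName Microsoft.PowerShell.SecretStore -DefaultVault
# store a credential as a secret
Set-Secret -Name SqlSaCred -Secret (Get-Credential -UserName sa -Message "sa password")
# retrieve it later in a demo or lab
$cred = Get-Secret -Name SqlSaCred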
Notebooks are as good as blog posts
I love notebooks, and to show some people who had asked about storing secrets, I have created some. So, because I am efficient lazy, I have embedded them here for you to see. You can find them in my Jupyter Notebook repository
I restored the AdventureWorks database to use the /var/opt/sqlserver directory and ran a workload. After a while the container stopped, and when I examined the logs I found a whole load of these errors
2019-04-02 20:48:24.73 spid58 Error: 17053, Severity: 16, State: 1.
2019-04-02 20:48:24.73 spid58 FCB::MakePreviousWritesDurable: Operating system error (null) encountered.
2019-04-02 20:48:24.74 spid58 Error: 9001, Severity: 21, State: 1.
2019-04-02 20:48:24.74 spid58 The log for database 'AdventureWorks2014' is not available. Check the operating system error log for related error messages. Resolve any errors and restart the database.
2019-04-02 20:48:25.05 spid58 Error: 9001, Severity: 21, State: 16.
2019-04-02 20:48:25.05 spid58 The log for database 'AdventureWorks2014' is not available. Check the operating system error log for related error messages. Resolve any errors and restart the database.
2019-04-02 20:48:25.06 spid52 Error: 9001, Severity: 21, State: 16.
2019-04-02 20:48:25.06 spid52 The log for database 'AdventureWorks2014' is not available. Check the operating system error log for related error messages. Resolve any errors and restart the database.
Then some of these
2019-04-02 20:55:16.26 spid53 Error: 17053, Severity: 16, State: 1.
2019-04-02 20:55:16.26 spid53 /var/opt/sqlserver/AdventureWorks2014_Data.mdf: Operating system error 31(A device attached to the system is not functioning.) encountered.
Then it went really bad
2019-04-02 20:55:16.35 spid53 Error: 3314, Severity: 21, State: 3.
2019-04-02 20:55:16.35 spid53 During undoing of a logged operation in database 'AdventureWorks2014' (page (0:0) if any), an error occurred at log record ID (65:6696:25). Typically, the specific failure is logged previously as an error in the operating system error log. Restore the database or file from
a backup, or repair the database.
2019-04-02 20:55:16.37 spid53 Database AdventureWorks2014 was shutdown due to error 3314 in routine 'XdesRMReadWrite::RollbackToLsn'. Restart for non-snapshot databases will be attempted after all connections to the database are aborted.
Restart packet created for dbid 5.
2019-04-02 20:55:16.41 spid53 Error during rollback. shutting down database (location: 1).
After that it tried to restart the database
2019-04-02 20:55:16.44 spid53 Error: 3314, Severity: 21, State: 3.
2019-04-02 20:55:16.44 spid53 During undoing of a logged operation in database 'AdventureWorks2014' (page (0:0) if any), an error occurred at log record ID (65:6696:25). Typically, the specific failure is logged previously as an error in the operating system error log. Restore the database or file from
a backup, or repair the database.
2019-04-02 20:55:16.49 spid53 Error: 3314, Severity: 21, State: 5.
2019-04-02 20:55:16.49 spid53 During undoing of a logged operation in database 'AdventureWorks2014' (page (0:0) if any), an error occurred at log record ID (65:6696:1). Typically, the specific failure
is logged previously as an error in the operating system error log. Restore the database or file from a backup, or repair the database.
Restart packet processing for dbid 5.
2019-04-02 20:55:17.04 spid52 [5]. Feature Status: PVS: 0. CTR: 0. ConcurrentPFSUpdate: 0.
2019-04-02 20:55:17.06 spid52 Starting up database 'AdventureWorks2014'.
But that caused
2019-04-02 20:55:17.90 spid76 Error: 9001, Severity: 21, State: 16.
2019-04-02 20:55:17.90 spid76 The log for database 'master' is not available. Check the operating
system error log for related error messages. Resolve any errors and restart the database.
Master eh? Now what will you do?
2019-04-02 20:55:25.55 spid52 29 transactions rolled forward in database 'AdventureWorks2014' (5:0). This is an informational message only. No user action is required.
2019-04-02 20:55:25.90 spid52 1 transactions rolled back in database 'AdventureWorks2014' (5:0). This is an informational message only. No user action is required.
2019-04-02 20:55:25.90 spid52 Recovery is writing a checkpoint in database 'AdventureWorks2014' (5). This is an informational message only. No user action is required.
2019-04-02 20:55:26.16 spid52 Recovery completed for database AdventureWorks2014 (database ID 5) in 7 second(s) (analysis 424 ms, redo 5305 ms, undo 284 ms.) This is an informational message only. No user action is required.
2019-04-02 20:55:26.21 spid52 Parallel redo is shutdown for database 'AdventureWorks2014' with worker pool size [1].
2019-04-02 20:55:26.27 spid52 CHECKDB for database 'AdventureWorks2014' finished without errors on 2018-03-24 00:38:39.313 (local time). This is an informational message only; no user action is required.
Interesting, then back to this.
2019-04-02 21:00:00.57 spid51 Error: 17053, Severity: 16, State: 1.
2019-04-02 21:00:00.57 spid51 FCB::MakePreviousWritesDurable: Operating system error (null) encountered.
2019-04-02 21:00:00.62 spid51 Error: 9001, Severity: 21, State: 1.
2019-04-02 21:00:00.62 spid51 The log for database 'AdventureWorks2014' is not available. Check the operating system error log for related error messages. Resolve any errors and restart the database.
2019-04-02 21:00:00.64 spid51 Error: 9001, Severity: 21, State: 16.
It did all that again before
This program has encountered a fatal error and cannot continue running at Tue Apr 2 21:04:08 2019
The following diagnostic information is available:
Reason: 0x00000004
Message: RETAIL ASSERT: Expression=(false) File=Thread.cpp Line=4643 Description=Timed out waiting for thread terminate/suspend/resume.
Stacktrace: 000000006af30187 000000006af2836a 000000006ae4a4d1
000000006ae48c55 000000006af6ab5e 000000006af6ac04
00000002809528df
Process: 7 - sqlservr
Thread: 129 (application thread 0x1e8)
Instance Id: 215cfcc9-8f69-4869-9a52-5aa44a415a83
Crash Id: 53e98400-33f1-4786-98fd-484f0c8d9a7e
Build stamp: 0e53295d0e1704ae5b221538dd6e2322cd46134e0cc32be49c887ca84cdb8c10
Distribution: Ubuntu 16.04.6 LTS
Processors: 2
Total Memory: 4906205184 bytes
Timestamp: Tue Apr 2 21:04:08 2019
Ubuntu 16.04.6 LTS
Capturing core dump and information to /var/opt/mssql/log...
/usr/bin/find: '/proc/7/task/516': No such file or directory
dmesg: read kernel buffer failed: Operation not permitted
No journal files were found.
No journal files were found.
Attempting to capture a dump with paldumper
WARNING: Capture attempt failure detected
Attempting to capture a filtered dump with paldumper
WARNING: Attempt to capture dump failed. Reference /var/opt/mssql/log/core.sqlservr.7.temp/log/paldumper-debug.log for details
Attempting to capture a dump with gdb
WARNING: Unable to capture crash dump with GDB. You may need to
allow ptrace debugging, enable the CAP_SYS_PTRACE capability, or
run as root.
failing to capture its dump!! Oops 🙂
I had to recreate the containers without using the named volumes and then I could run my workload 🙂
Nothing particularly useful about this blog post other than an interesting look at the error log when things go wrong 🙂
Andrew's instructions worked perfectly, and I thought I would try them using a docker-compose file, as I like the ease of spinning up containers with one.
I created a docker-compose file like this, which will map my backup folder on my Windows 10 laptop to a directory in the container and two more folders to the system folders in the container, in the same way as Andrew does in his blog.
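A sketch of the shape of that file, with illustrative image, port, password, host paths and volume names (not Andrew's exact values), looks like this:
version: '3.7'
services:
  sql2017:
    image: microsoft/mssql-server-windows-developer   # illustrative image
    ports:
      - "15789:1433"
    environment:
      - ACCEPT_EULA=Y
      - sa_password=Testing1122
    volumes:
      - C:\MSSQL\BACKUPS:C:\SQLBackups   # backup folder on my laptop
      - mssqlsystem:C:\SQLSystem         # system database files
      - mssqluser:C:\SQLUser             # user database files
volumes:
  mssqlsystem:
  mssqluser: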
Running docker-compose up -d will build the containers as defined in the docker-compose file; the -d flag runs the containers in the background. This was the result.
UPDATE – 2019-03-27
I have no idea why, but today it has worked as expected using the above docker-compose file. I had tried this a couple of times, restarted docker and restarted my laptop and was consistently getting the results below – however today it has worked
So feel free to carry on reading; it's a fun story and it shows how you can persist the databases in a new container, but the above docker-compose file has worked!
This is an evaluation version. There are [153] days left in the evaluation period.
This program has encountered a fatal error and cannot continue running at Tue Mar 26 19:40:35 2019
The following diagnostic information is available:
Reason: 0x00000006
Status: 0x40000015
Message: Kernel bug check
Address: 0x6b643120
Parameters: 0x10861f680
Stacktrace: 000000006b72d63f 000000006b64317b 000000006b6305ca
000000006b63ee02 000000006b72b83a 000000006b72a29d
000000006b769c02 000000006b881000 000000006b894000
000000006b89c000 0000000000000001
Process: 7 - sqlservr
Thread: 11 (application thread 0x4)
Instance Id: e01b154f-7986-42c6-ae13-c7d34b8b257d
Crash Id: 8cbb1c22-a8d6-4fad-bf8f-01c6aa5389b7
Build stamp: 0e53295d0e1704ae5b221538dd6e2322cd46134e0cc32be49c887ca84cdb8c10
Distribution: Ubuntu 16.04.6 LTS
Processors: 2
Total Memory: 4906205184 bytes
Timestamp: Tue Mar 26 19:40:35 2019
Ubuntu 16.04.6 LTS
Capturing core dump and information to /var/opt/mssql/log...
dmesg: read kernel buffer failed: Operation not permitted
No journal files were found.
No journal files were found.
Attempting to capture a dump with paldumper
WARNING: Capture attempt failure detected
Attempting to capture a filtered dump with paldumper
WARNING: Attempt to capture dump failed. Reference /var/opt/mssql/log/core.sqlservr.7.temp/log/paldumper-debug.log for details
Attempting to capture a dump with gdb
WARNING: Unable to capture crash dump with GDB. You may need to allow ptrace debugging, enable the CAP_SYS_PTRACE capability, or run as root.
which told me that …………. it hadn’t worked. So I removed the containers with
docker-compose down
I thought I would create the volumes ahead of time, as Andrew's blog had mentioned, with
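The volume names here are hypothetical; they just need to match the ones the compose file references:
docker volume create mssqlsystem
docker volume create mssqluser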
and then ran docker-compose up without the -d flag so that I could see all of the output
You can see in the output that the system database files are being moved. That looks like it is working, so I used CTRL + C to stop the container and return to the terminal. I then ran docker-compose up -d and
I created a special database for Andrew.
This made me laugh out loud…as there's a strong possibility that could happen https://t.co/sh0pnhtPQy
— Andrew Pruski 🏴 🇮🇪 (@dbafromthecold) March 23, 2019
I could then remove the container with
docker-compose down
To make sure there is nothing up my sleeve I altered the docker-compose file to use a different name and port but kept the volume definitions the same.
I ran docker-compose up -d again, connected to the new container, and lo and behold the database is still there
So after doing this, I have learned that to persist the databases and to use docker-compose files I had to map the volume to the mountpoint of the docker volume. Except I haven’t, I have learned that sometimes weird things happen with Docker on my laptop!!
It reminded me that I do something very similar to test dbachecks code changes. I thought this might make a good blog post. I will talk through how I do this locally as I merge a PR from another great friend Cláudio Silva who has added agent job history checks.
GitHub PR VS Code Extension
I use the GitHub Pull Requests extension for VS Code to work with pull requests for dbachecks. This enables me to see all of the information about the pull request, merge it, review it, and comment on it, all from VS Code
I can also see which files have been changed and which changes have been made
Once I am ready to test the pull request I perform a checkout using the extension
This will update all of the files in my local repository with all of the changes in this pull request
You can see at the bottom left that the branch changes from development to the name of the PR.
Running The Unit Tests
The first thing that I do is run the Unit Tests for the module. These test that the code follows all of the guidelines that we require and that the tests are formatted in the correct way for the Power BI to parse. I have blogged about this here and here, and we use this Pester in our CI process in Azure DevOps, which I described here.
I navigate to the root of the dbachecks repository on my local machine and run the unit tests.
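A hedged guess at the invocation, assuming the Pester tests sit in a tests folder at the repository root:
Invoke-Pester .\tests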
Thank you Cláudio, the code has passed the tests 😉
Running Some Integration Tests
The difference between Unit tests and Integration tests, in a nutshell, is that Unit tests check that the code does what is expected without any external influences, whilst Integration tests check that the code does what is expected when running against an actual environment. In this scenario we know that the code is doing what is expected, but we want to check what it does when it runs against a SQL Server, and even when it runs against multiple SQL Servers of different versions.
Multiple Versions of SQL Server
As I have described before, my friend and former colleague Andrew Pruski b | t has many resources for running SQL in containers. This means that I can quickly and easily create fresh, uncontaminated instances of SQL 2012, 2014, 2016 and 2017.
I can create 4 instances of different versions of SQL in (a tad over) 1 minute. How about you?
Imagine how long it would take to run the installers for 4 versions of SQL and the pain you would have trying to uninstall them and make sure everything is ‘clean’. Even images that have been sysprep’d won’t be done in 1 minute.
Docker Compose Up?
So what is this magic command that has enabled me to do this? docker-compose uses a YAML file to define multi-container applications. This means that with a file called docker-compose.yml like this
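A sketch of the shape of such a file, matching the ports listed below (the image names and sa password are illustrative):
version: '3.7'
services:
  sql2012:
    image: dbafromthecold/sqlserver2012dev   # illustrative image name
    ports:
      - "15586:1433"
    environment:
      - ACCEPT_EULA=Y
      - sa_password=Testing1122
  # sql2014 and sql2016 follow the same pattern on ports 15587 and 15588
  sql2017:
    image: microsoft/mssql-server-windows-developer
    ports:
      - "15589:1433"
    environment:
      - ACCEPT_EULA=Y
      - sa_password=Testing1122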
you can run docker-compose up -d and 4 SQL containers are available to you. You can interact with them via SSMS if you wish, using localhost,PORTNUMBER. The port numbers in the above file are 15586, 15587, 15588 and 15589
Now it must be noted, as I describe here, that I had first pulled the images to my laptop. The first time you run docker-compose it will take significantly longer if you haven't pulled the images already (pulling the images will take quite a while depending on your broadband speed).
Credential
The next thing is to save a credential to make it easier to automate. I use the method described by my PowerShell friend Jaap Brasser here. I run this code
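A hedged reconstruction of Jaap's Export-Clixml method, matching the Import-Clixml call used later in this post:
# export the sa credential to disk, encrypted for this user on this machine
Get-Credential -UserName sa -Message "Enter the sa password" | Export-Clixml -Path $HOME\Documents\sa.cred
# read it back into the session for the integration tests
$cred = Import-Clixml -Path $HOME\Documents\sa.cred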
Now I can start to run my Integration tests. First reset the dbachecks configuration and set some configuration values
# reset the dbachecks configuration to the defaults
$null = Reset-DbcConfig
# run the checks against these instances
$null = Set-DbcConfig -Name app.sqlinstance -Value $containers
# We are using SQL authentication
$null = Set-DbcConfig -Name policy.connection.authscheme -Value SQL
# sometimes it's a bit slower than the default value
$null = Set-DbcConfig -Name policy.network.latencymaxms -Value 100 # because the containers run a bit slow!
Then I will run the dbachecks connectivity checks and save the results to a variable without showing any output
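Given the variable name used in the Pester test below and the parameters used later in this post, it looks something like this (the check name is an assumption):
$ConnectivityTests = Invoke-DbcCheck -SqlCredential $cred -Check InstanceConnection -Show None -PassThru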
I can then use Pester to check that dbachecks has worked as expected by testing if the FailedCount property returned is 0.
Describe "Testing the checks are running as expected" -Tag Integration {
Context "Connectivity Checks" {
It "All Tests should pass" {
$ConnectivityTests.FailedCount | Should -Be 0 -Because "We expect all of the checks to run and pass with default settings"
}
}
}
What is the Unit Test for this PR?
Next I think about what we need to be testing for this PR. The Unit tests will help us.
Choose some Integration Tests
This check covers the Agent job history settings, and the unit tests are
It “Passes Check Correctly with Maximum History Rows disabled (-1)”
It “Fails Check Correctly with Maximum History Rows disabled (-1) but configured value is 1000”
It “Passes Check Correctly with Maximum History Rows being 10000”
It “Fails Check Correctly with Maximum History Rows being less than 10000”
It “Passes Check Correctly with Maximum History Rows per job being 100”
It “Fails Check Correctly with Maximum History Rows per job being less than 100”
So we will check the same things on actual SQL Servers. First, though, we need to start the SQL Server Agent, as it is not started by default. We can do this as follows
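One hedged way for Windows containers is to start the Agent service inside each one with docker exec (the container names are illustrative; check docker ps for yours):
foreach ($container in 'sql2012', 'sql2014', 'sql2016', 'sql2017') {
    # SQLSERVERAGENT is the service name for a default instance
    docker exec $container powershell -Command "Start-Service SQLSERVERAGENT"
}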
Unfortunately, the agent service won't start in the SQL 2014 container, so I can't run agent integration tests for that container, but it's better than no integration tests.
This is What We Will Test
So we want to test if the check will pass with default settings. In general, dbachecks will pass for default instance, agent, or database settings by default.
We also want the check to fail if the configured value for dbachecks is set to default but the value has been set on the instance.
We want the check to pass if the configured value for the dbachecks configuration is set and the instance (agent, database) setting matches it.
If You Are Doing Something More Than Once ……
Let’s automate that. We are going to be repeatedly running those three tests for each setting that we are running integration tests for. I have created 3 functions for this again checking that FailedCount or Passed Count is 0 depending on the test.
function Invoke-DefaultCheck {
    It "All Checks should pass with default for $Check" {
        $Tests = Get-Variable "$($Check)default" -ValueOnly
        $Tests.FailedCount | Should -Be 0 -Because "We expect all of the checks to run and pass with default setting (Yes we may set some values before but you get my drift)"
    }
}
function Invoke-ConfigCheck {
    It "All Checks should fail when config changed for $Check" {
        $Tests = Get-Variable "$($Check)configchanged" -ValueOnly
        $Tests.PassedCount | Should -Be 0 -Because "We expect all of the checks to run and fail when we have changed the config values"
    }
}
function Invoke-ValueCheck {
    It "All Checks should pass when setting changed for $Check" {
        $Tests = Get-Variable "$($Check)valuechanged" -ValueOnly
        $Tests.FailedCount | Should -Be 0 -Because "We expect all of the checks to run and pass when we have changed the settings to match the config values"
    }
}
Now I can use those functions inside a loop in my Integration Pester Test
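The shape of the loop follows from the function and variable names above (the $Checks list is an assumption; each entry must match the prefix of the saved result variables, like errorlogscount below):
$Checks = 'JobHistory', 'errorlogscount'   # hypothetical list of checks under test
Describe "Testing the checks are running as expected" -Tag Integration {
    foreach ($Check in $Checks) {
        Context "$Check Checks" {
            Invoke-DefaultCheck
            Invoke-ConfigCheck
            Invoke-ValueCheck
        }
    }
}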
And then we will check that all of the checks are passing and failing as expected
Invoke-Pester .\DockerTests.ps1
Integration Test For Error Log Counts
There is another integration test there for the error logs count. This works in the same way. Here is the code
#region error Log Count - PR 583
# default test
$errorlogscountdefault = Invoke-DbcCheck -SqlCredential $cred -Check ErrorLogCount -Show None -PassThru
# set a value and then it will fail
$null = Set-DbcConfig -Name policy.errorlog.logcount -Value 10
$errorlogscountconfigchanged = Invoke-DbcCheck -SqlCredential $cred -Check ErrorLogCount -Show None -PassThru
# set the value and then it will pass
$null = Set-DbaErrorLogConfig -SqlInstance $containers -SqlCredential $cred -LogCount 10
$errorlogscountvaluechanged = Invoke-DbcCheck -SqlCredential $cred -Check ErrorLogCount -Show None -PassThru
#endregion
Merge the Changes
So with all the tests passing I can merge the PR into the development branch, and Azure DevOps will start a build. Ultimately, I would like to add the integration tests to the build as well, following André's blog post, but for now I used the GitHub Pull Request extension to merge the pull request into development, which started a build, and then merged that into master, which signed the code and deployed it to the PowerShell Gallery, as you can see here. And the result is
Just for fun I decided to spend Christmas Eve getting Windows and Linux SQL containers running together.
WARNING
This is NOT a production-ready solution; in fact, I would not even recommend that you try it. I definitely wouldn't recommend it on any machine with anything useful on it that you want to use again. We will be using a re-compiled dockerd.exe created by someone else, and you know the rules about downloading things from the internet, don't you? And about trusting unknown, unverified people?
Maybe you can try this in an Azure VM or somewhere else safe.
Anyway, with that in mind, let's go.
Linux Containers On Windows
You can run Linux containers on Windows in Docker as follows. You need to be running the latest Docker for Windows.
Right click on the whale in the task bar and select Settings
Notice that I am running Windows containers, as there is a Switch to Linux containers option. If you see Switch to Windows containers then click that first.
Click on Daemon, tick the experimental features tick box, and press Apply.
Docker will restart and you can now run Linux containers alongside Windows containers.
So you can pull the Ubuntu container with
docker pull ubuntu:18.04
and then you can run it with
docker run -it --name ubuntu ubuntu:18.04
There you go, one Linux container running 🙂 A good resource for learning bash for SQL Server DBAs is Kellyn Pot'Vin-Gorman's b | t series on Simple Talk
Type exit to get out of the container, and then remove it.
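Removing it is simply:
docker rm ubuntu
Next, run a Linux SQL Server container; a hedged guess at the command, using the 2019 CTP 2.2 Ubuntu image mentioned later in this post (the port, name and password are illustrative):
docker run -d -p 15789:1433 -e ACCEPT_EULA=Y -e SA_PASSWORD=Testing1122 --name sqllinux mcr.microsoft.com/mssql/server:2019-CTP2.2-ubuntu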
When you do, the command will finish successfully but the container won't be started (as can be seen by the red dot in the Docker explorer).
If you look at the logs for the container (I am lazy, so I right click on the container and choose Show Logs in VS Code 🙂 ) you will see
sqlservr: This program requires a machine with at least 2000 megabytes of memory. /opt/mssql/bin/sqlservr: This program requires a machine with at least 2000 megabytes of memory.
Now, if you are running Linux containers, this is an easy fix. All you have to do is to right click on the whale in the taskbar, choose Settings, Advanced and move the slider for the Memory and click apply.
But in Windows containers that option is not available.
If you go a-googling you will find that Shawn Melton created an issue for this many months ago, which gets referenced by this issue for the guest compute service, which references this PR in moby. But as this hasn’t been merged into master yet it is not available. I got bored of waiting for this and decided to look a bit deeper today.
Get It Working Just For Fun
So, you read the warning at the top?
Now let’s get it working. I take zero credit here. All of the work was done by Brian Weeteling b | G in this post
So you can follow Brian's examples and check out the source code and compile it as he says, or you can download the exe that he has made available (remember the warning?)
Stop Docker for Windows, and with the file downloaded and unzipped, open an admin PowerShell, navigate to the directory where the dockerd.exe file is, and run
.\dockerd.exe
You will get output like this, and it will keep going for a while. Leave this window open whilst you are using Docker like this.
Once you see that the daemon has finished starting up, open a new PowerShell window or VS Code. You will need to run it as admin. I ran
docker ps -a
to see if it was up and available.
I also had to create a bootx64.efi file at C:\Program Files\Linux Containers which I did by copying and renaming the kernel file in that folder.
Now I can use a docker-compose file to create 5 containers. Four will be Windows containers from Andrew's Docker Hub repositories or Microsoft's Docker Hub, for SQL 2012, SQL 2014, SQL 2016 and SQL 2017, and one will be the latest Ubuntu SQL 2019 CTP 2.2 image. Note that you have to use version 2.4 of docker-compose, as the platform tag is not available yet in any later version, although it is coming to 3.7 soon.
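A sketch showing the important parts, the 2.4 version and the platform tags (the image names, ports and password are illustrative):
version: '2.4'
services:
  sql2012:
    image: dbafromthecold/sqlserver2012dev   # illustrative image name
    platform: windows
    ports:
      - "15592:1433"
    environment:
      - ACCEPT_EULA=Y
      - sa_password=Testing1122
  # sql2014, sql2016 and sql2017 follow the same windows pattern
  sql2019:
    image: mcr.microsoft.com/mssql/server:2019-CTP2.2-ubuntu
    platform: linux
    ports:
      - "15599:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Testing1122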
Save this code as docker-compose.yml and navigate to the directory in an admin PowerShell or VS Code and run
docker-compose up -d
and now I have Windows and Linux SQL containers running together. This means that I can test some code against all versions of SQL from 2012 to 2019 easily in containers 🙂
So that is just a bit of fun.
To return to the normal Docker, simply CTRL and C the admin PowerShell you ran .\dockerd.exe in and you will see the logs showing it shutting down.
You will then be able to start Docker For Windows as usual.
I look forward to the time, hopefully early next year when all of the relevant PR’s have been merged and this is available in Docker for Windows.
I wanted to create containers running SQL2017, SQL2016, SQL2014 and SQL2012 and restore versions of the AdventureWorks database onto each one.
Move Docker Location
I redirected my docker location from my C:\ drive to my E:\ drive so I didn't run out of space. I did this by creating a daemon.json file in C:\ProgramData\docker\config and adding
{"data-root": "E:\\containers"}
and restarting the docker service which created folders like this
Then I ran
docker volume create SQLBackups
to create a volume to hold the backups that I could mount on the containers
I also needed the images for other versions. My good friend Andrew Pruski b | t has versions available for us to use on his Docker Hub, so it is just a case of running docker pull for each image.
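Hedged examples (the repository tags are illustrative):
docker pull dbafromthecold/sqlserver2012dev
docker pull dbafromthecold/sqlserver2014dev
docker pull dbafromthecold/sqlserver2016dev
I also saved the sa credential to disk with Jaap Brasser's Export-Clixml method, a hedged reconstruction being:
Get-Credential -UserName sa -Message "Enter the sa password" | Export-Clixml -Path $HOME\Documents\sa.cred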
which means that I can get the credentials in my PowerShell session (as long as it is the same user that created the file) using
$cred = Import-Clixml $HOME\Documents\sa.cred
Restoring the databases
I restored all of the AdventureWorks databases that each instance will support onto each instance, so 2017 has all of them whilst 2012 only has the 2012 versions.
First I needed to get the filenames of the backup files into a variable
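A hedged sketch, using the host path that the redirected data-root above gives the SQLBackups volume:
$BackupFiles = Get-ChildItem -Path E:\containers\volumes\SQLBackups\_data -Filter *.bak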
then I can restore the databases using dbatools, with a switch statement on the version, which I get from the NameLevel property of Get-DbaSqlBuildReference.
I need to create the file paths for each backup file by getting the correct backups and appending the names to C:\SQLBackups, which is where the volume is mounted inside the container, as in the sketch below.
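A hedged sketch of that approach (the version matching and the $containers variable are assumptions):
foreach ($container in $containers) {
    # NameLevel gives the version name, e.g. 2012, 2014, 2016, 2017
    $version = (Get-DbaSqlBuildReference -SqlInstance $container -SqlCredential $cred).NameLevel
    # pick the backup files this version can restore (older versions restore fewer)
    $backups = switch ($version) {
        2012 { $BackupFiles | Where-Object Name -Match '2012' }
        2014 { $BackupFiles | Where-Object Name -Match '2012|2014' }
        2016 { $BackupFiles | Where-Object Name -Match '2012|2014|2016' }
        2017 { $BackupFiles }
    }
    # C:\SQLBackups is where the volume is mounted inside the container
    $paths = $backups | ForEach-Object { "C:\SQLBackups\$($_.Name)" }
    Restore-DbaDatabase -SqlInstance $container -SqlCredential $cred -Path $paths
}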
As Get-DbaDatabase gives the container ID as the Computer Name I have highlighted each container below
That is how easy it is to create a number of SQL containers of differing versions for your presentation or exploration needs
I had a question from my good friend Andrew Pruski, dbafromthecold on Twitter, or SQL Container Man as I call him 🙂
How do you guys run SQL Commands in dbatools
I will answer that at the bottom of this post, but during our discussion Andrew said he wanted to show the version of the SQL running in the Docker Container.
That's easy, I said. Here's how to do it.
You need to have installed Docker first; see this page. You can switch to using Windows containers by right-clicking on the icon in the taskbar and choosing the command. If you have not already, then pull the SQL 2017 image and run a container.
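Hedged equivalents, using the Windows developer image of the time and the container name SQL2017 that the rest of this post references (the port and password are illustrative):
docker pull microsoft/mssql-server-windows-developer
docker run -d -p 15789:1433 -e ACCEPT_EULA=Y -e sa_password=Testing1122 --name SQL2017 microsoft/mssql-server-windows-developer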
In only a few seconds you have a SQL 2017 instance up and running (take a look at Andrew's blog at dbafromthecold.com for a great container series with much greater detail)
Now that we have our container, we need to connect to it, so we need to gather its IP address. We can do this using the docker inspect command, but I like to make things a little more programmatic. This works on my Windows 10 machine for Windows SQL containers. It appears there are some errors on other machines, but there is an alternative below.
$inspect = docker inspect SQL2017
<#
IPAddress": matches the characters IPAddress": literally (case sensitive)
\s matches any whitespace character (equal to [\r\n\t\f\v ])
" matches the character " literally (case sensitive)
1st Capturing Group (\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3})
\d{1,3} matches a digit (equal to [0-9])
. matches any character (except for line terminators)
\d{1,3} matches a digit (equal to [0-9])
. matches any character (except for line terminators)
\d{1,3} matches a digit (equal to [0-9])
. matches any character (except for line terminators)
\d{1,3} matches a digit (equal to [0-9])
#>
$IpAddress = [regex]::matches($inspect,"IPAddress`":\s`"(\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3})").groups[1].value
Those two lines of code (and several lines of comments) put the results of the docker inspect command into a variable and then use regex to pull out the IP address
If you are getting errors with that, you can also use
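One hedged option is to let docker inspect do the parsing with a Go template; for Windows containers the network is usually called nat:
$IpAddress = docker inspect --format "{{.NetworkSettings.Networks.nat.IPAddress}}" SQL2017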
It’s slightly different with a Linux SQL container. Switch Docker to run Linux containers by right-clicking on the icon in the taskbar and choosing the command to switch.