I had set the network security rules to accept connections only from my static IP using variables in the Build Pipeline. I use MobaXterm as my SSH client. It's a free download. I click on Sessions, choose an SSH session, fill in the remote host address from the portal, fill in the password, and connect.
Configuring SQL
The next task is to configure the SQL installation. Following the instructions on the Microsoft docs site I run
Azure Data Studio is a cross-platform database tool for data professionals using the Microsoft family of on-premises and cloud data platforms on Windows, MacOS, and Linux.
Recently Vicky Harp tweeted
We're getting very close to release of SQL Notebooks in @AzureDataStudio! You can give the feature an early spin today with the insider build. pic.twitter.com/SEZp7ZdxCp
See the change log for additional details of what’s in this release.
Once you have installed it, you can connect to an instance, right-click and choose New Notebook, or you can use File – New Notebook.
Incidentally, I use the docker-compose file here to create the containers, and I map C:\MSSQL\BACKUP\KEEP on my local machine (where my backups are) to /var/opt/mssql/backups on the containers on lines 10 and 17 of the docker-compose file, so change as required. If you want to follow along, put the ValidationResults.bak in the folder on your local machine. The Create-Ag.ps1 shows the code and creates an AG with dbatools. But I digress!
Install Notebook Dependencies
Once you click New Notebook you will get a prompt to install the dependencies.
It will show its output and take a few minutes to run. It took nearly 11 minutes on my machine.
Create a Notebook
OK, so now that we have the dependencies installed we can create a notebook. I decided to use the ValidationResults database that I use for my dbachecks demos and describe here. I need to restore it from my local folder that I have mapped as a volume to my container. Of course, I use dbatools for this 🙂
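A minimal sketch of that restore, assuming the container listens on localhost,15592 and that the sa password matches the one in the docker-compose file:

$cred = Get-Credential -UserName sa -Message "sa password for the container"
Restore-DbaDatabase -SqlInstance 'localhost,15592' -SqlCredential $cred -Path '/var/opt/mssql/backups/ValidationResults.bak' -WithReplace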
I had already got a connection saved to the instance in Azure Data Studio, you may need to create a new one using the new connection icon at the top left and filling in the details. The password is in the code above.
Now I can start with my notebook. I am faced with this
I click on text and provide an intro
Once I had written that and clicked out, I couldn’t see what to do straight away!
Then I saw the code and text buttons at the top 🙂 Right, let's get on with it 🙂 I hit the code button and paste in the T-SQL to reset the dates in the database to simulate dbachecks having been run this morning.
There’s a run cell button on the right and when I press it
Cool 🙂
If the SQL query has results then they are shown as well
This is fun and I can see plenty of uses for it. Go and have a play with SQL notebooks 🙂
Source Control
I used CTRL+K, CTRL+O to open a folder and saved my notebook in my local Presentations folder, which is source controlled. When I opened the explorer with CTRL+SHIFT+E, I could see that the folder and the file are colour coded green and have a U next to them, marking them as untracked. I could also see that the source control icon has a 1 for the number of files with changes and, in the bottom left, that I am in the master branch.
If I click on the source control icon (or CTRL + SHIFT + G) I can see the files with the changes and can enter a commit message
I then press CTRL + ENTER to commit my change and get this pop-up
As I only have one file and it has all the changes for this commit I click yes. If I had changed more than one file and only wanted to commit a single one at a time I would hover my mouse over the file and click the + to stage my change.
If I make a further change to the notebook and save it, I can see that the source control provider recognises the change but this time the folder the file is in and the file are colour coded brown with an M to show that they have been modified.
Unlike Visual Studio Code, when you then click on the source control icon and click on the change it does not show the differences in the notebook although this works with SQL files.
When I have made all my changes and committed them with good commit messages
I can see that there are 3 local changes ready to be pushed to my remote repository (GitHub in this case) and 0 remote commits in this branch by looking at the bottom left.
I can click on the “roundy roundy” icon (I don't know its proper name 😊) and synchronise my changes. This comes with a pop-up.
Personally I never press OK, Don’t Show Again, because I like the double check and to think “Is this really what I want to do right now”. Once I press OK my changes will be synced with the remote repository. All of this means that you can find the notebook I have used in my Presentations GitHub repository, and that you can run the notebook too using the docker-compose file here and the instructions further up in the post.
It reminded me that I do something very similar to test dbachecks code changes, and I thought this might make a good blog post. I will talk through how I do this locally as I merge a PR from another great friend, Cláudio Silva, who has added agent job history checks.
GitHub PR VS Code Extension
I use the GitHub Pull Requests extension for VS Code to work with pull requests for dbachecks. This enables me to see all of the information about the pull request, merge it, review it, and comment on it, all from VS Code.
I can also see which files have been changed and which changes have been made
Once I am ready to test the pull request I perform a checkout using the extension
This will update all of the files in my local repository with all of the changes in this pull request
You can see at the bottom left that the branch changes from development to the name of the PR.
Running The Unit Tests
The first thing that I do is to run the unit tests for the module. These will test that the code is following all of the guidelines that we require and that the tests are formatted in the correct way for the Power BI to parse. I have blogged about this here and here, and we use these Pester tests in our CI process in Azure DevOps, which I described here.
I navigate to the root of the dbachecks repository on my local machine and run
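Something like this, assuming the repository keeps its Pester tests in a tests folder:

Invoke-Pester .\tests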
Thank you Cláudio, the code has passed the tests 😉
Running Some Integration Tests
The difference between Unit tests and Integration tests in a nutshell is that the Unit tests are testing that the code is doing what is expected without any other external influences whilst the Integration tests are checking that the code is doing what is expected when running on an actual environment. In this scenario we know that the code is doing what is expected but we want to check what it does when it runs against a SQL Server and even when it runs against multiple SQL Servers of different versions.
Multiple Versions of SQL Server
As I have described before, my friend and former colleague Andrew Pruski b | t has many resources for running SQL in containers. This means that I can quickly and easily create fresh, uncontaminated instances of SQL 2012, 2014, 2016 and 2017.
I can create 4 instances of different versions of SQL in (a tad over) 1 minute. How about you?
Imagine how long it would take to run the installers for 4 versions of SQL and the pain you would have trying to uninstall them and make sure everything is ‘clean’. Even images that have been sysprep’d won’t be done in 1 minute.
Docker Compose Up?
So what is this magic command that has enabled me to do this? Docker Compose uses a YAML file to define multi-container applications. This means that with a file called docker-compose.yml like this,
you can run docker-compose up -d and 4 SQL containers are available to you. You can interact with them via SSMS if you wish, using localhost comma PORTNUMBER. The port numbers in the above file are 15586, 15587, 15588 and 15589.
Now it must be noted, as I describe here, that I first pulled the images to my laptop. The first time you run docker-compose it will take significantly longer if you haven't pulled the images already (pulling the images will take quite a while depending on your broadband speed).
Credential
The next thing is to save a credential to make it easier to automate. I use the method described by my PowerShell friend Jaap Brasser here. I run this code
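A sketch of the approach: save the credential once with Export-Clixml (the file path here is just an example) and import it whenever it is needed:

$cred = Get-Credential -UserName sa -Message "sa password for the containers"
$cred | Export-Clixml -Path C:\MSSQL\sacred.xml
# later, or in another script, read it back
$cred = Import-Clixml -Path C:\MSSQL\sacred.xml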
Now I can start to run my Integration tests. First, reset the dbachecks configuration and set some configuration values:
# run the checks against these instances
$null = Set-DbcConfig -Name app.sqlinstance -Value $containers
# We are using SQL authentication
$null = Set-DbcConfig -Name policy.connection.authscheme -Value SQL
# sometimes it's a bit slower than the default value
$null = Set-DbcConfig -Name policy.network.latencymaxms -Value 100 # because the containers run a bit slow!
Then I will run the dbachecks connectivity checks and save the results to a variable without showing any output
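A sketch of that run; the InstanceConnection check name is an assumption, and the variable name matches the Pester test below:

$ConnectivityTests = Invoke-DbcCheck -SqlCredential $cred -Check InstanceConnection -Show None -PassThru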
I can then use Pester to check that dbachecks has worked as expected by testing if the FailedCount property returned is 0.
Describe "Testing the checks are running as expected" -Tag Integration {
Context "Connectivity Checks" {
It "All Tests should pass" {
$ConnectivityTests.FailedCount | Should -Be 0 -Because "We expect all of the checks to run and pass with default settings"
}
}
}
What is the Unit Test for this PR?
Next I think about what we need to be testing for this PR. The unit tests will help us.
Choose some Integration Tests
This check is checking the Agent job history settings and the unit tests are
It “Passes Check Correctly with Maximum History Rows disabled (-1)”
It “Fails Check Correctly with Maximum History Rows disabled (-1) but configured value is 1000”
It “Passes Check Correctly with Maximum History Rows being 10000”
It “Fails Check Correctly with Maximum History Rows being less than 10000”
It “Passes Check Correctly with Maximum History Rows per job being 100”
It “Fails Check Correctly with Maximum History Rows per job being less than 100”
So we will check the same things on real, actual SQL Servers. First, though, we need to start the SQL Server Agent, as it is not started by default. We can do this as follows:
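A hedged sketch: the container names are assumptions (use whatever names your docker-compose file gives them), and the Agent service inside each Windows container is started with docker exec:

$containerNames = 'sql2012', 'sql2014', 'sql2016', 'sql2017'
foreach ($name in $containerNames) {
    # SQLSERVERAGENT is the service name for a default instance
    docker exec $name powershell -Command "Start-Service SQLSERVERAGENT"
}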
Unfortunately, the agent service won't start in the SQL 2014 container, so I can't run agent integration tests for that container, but it's better than no integration tests.
This is What We Will Test
So we want to test if the check will pass with default settings. In general, dbachecks will pass for default instance, agent or database settings values by default.
We also want the check to fail if the configured value for dbachecks is set to default but the value has been set on the instance.
We want the check to pass if the configured value for the dbachecks configuration is set and the instance (agent, database) setting matches it.
If You Are Doing Something More Than Once ……
Let’s automate that. We are going to be repeatedly running those three tests for each setting that we are running integration tests for. I have created 3 functions for this, again checking that FailedCount or PassedCount is 0 depending on the test.
function Invoke-DefaultCheck {
    It "All Checks should pass with default for $Check" {
        $Tests = Get-Variable "$($Check)default" -ValueOnly
        $Tests.FailedCount | Should -Be 0 -Because "We expect all of the checks to run and pass with default setting (Yes we may set some values before but you get my drift)"
    }
}

function Invoke-ConfigCheck {
    It "All Checks should fail when config changed for $Check" {
        $Tests = Get-Variable "$($Check)configchanged" -ValueOnly
        $Tests.PassedCount | Should -Be 0 -Because "We expect all of the checks to run and fail when we have changed the config values"
    }
}

function Invoke-ValueCheck {
    It "All Checks should pass when setting changed for $Check" {
        $Tests = Get-Variable "$($Check)valuechanged" -ValueOnly
        $Tests.FailedCount | Should -Be 0 -Because "We expect all of the checks to run and pass when we have changed the settings to match the config values"
    }
}
Now I can use those functions inside a loop in my Integration Pester Test
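A sketch of what that loop might look like; the list of checks is hypothetical, and it relies on the results for each check being stored in variables named CHECKdefault, CHECKconfigchanged and CHECKvaluechanged, as the functions above expect:

$ChecksToTest = 'JobHistory', 'ErrorLogCount'
Describe "Testing the checks are running as expected" -Tag Integration {
    foreach ($Check in $ChecksToTest) {
        Context "$Check Checks" {
            Invoke-DefaultCheck
            Invoke-ConfigCheck
            Invoke-ValueCheck
        }
    }
}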
And then we will check that all of the checks are passing and failing as expected
Invoke-Pester .\DockerTests.ps1
Integration Test For Error Log Counts
There is another integration test there for the error logs count. This works in the same way. Here is the code
#region error Log Count - PR 583
# default test
$errorlogscountdefault = Invoke-DbcCheck -SqlCredential $cred -Check ErrorLogCount -Show None -PassThru
# set a value and then it will fail
$null = Set-DbcConfig -Name policy.errorlog.logcount -Value 10
$errorlogscountconfigchanged = Invoke-DbcCheck -SqlCredential $cred -Check ErrorLogCount -Show None -PassThru
# set the value and then it will pass
$null = Set-DbaErrorLogConfig -SqlInstance $containers -SqlCredential $cred -LogCount 10
$errorlogscountvaluechanged = Invoke-DbcCheck -SqlCredential $cred -Check ErrorLogCount -Show None -PassThru
#endregion
Merge the Changes
So with all the tests passing, I can merge the PR into the development branch and Azure DevOps will start a build. Ultimately, I would like to add the integration tests to the build as well, following André's blog post, but for now I used the GitHub Pull Request extension to merge the pull request into development, which started a build, and then merged that into master, which signed the code and deployed it to the PowerShell Gallery as you can see here. The result is
Just for fun I decided to spend Christmas Eve getting Windows and Linux SQL containers running together.
WARNING
This is NOT a production-ready solution; in fact, I would not even recommend that you try it. I definitely wouldn't recommend it on any machine with anything useful on it that you want to use again. We will be using a re-compiled dockerd.exe created by someone else, and you know the rules about downloading things from the internet and trusting unknown, unverified people, don't you?
Maybe you can try this in an Azure VM or somewhere else safe.
Anyway, with that in mind, let's go.
Linux Containers On Windows
You can run Linux containers on Windows in Docker as follows. You need to be running the latest Docker for Windows.
Right click on the whale in the task bar and select Settings
Notice that I am running Windows containers, as there is a "Switch to Linux containers" option. If you see "Switch to Windows containers" then click that first.
Click on Daemon, tick the Experimental features tick box, and press Apply.
Docker will restart and you can now run Linux containers alongside Windows containers.
So you can pull the Ubuntu container with
docker pull ubuntu:18.04
and then you can run it with
docker run -it --name ubuntu ubuntu:18.04
There you go, one Linux container running 🙂 A good resource for learning bash for SQL Server DBAs is Kellyn Pot’Vin-Gorman's (b | t) series on Simple Talk.
Type exit to get out of the container and, to remove it, run docker rm ubuntu.
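The next thing to try is a SQL Server Linux container. This is a sketch of the sort of command involved; the image tag, port, container name and password here are all assumptions:

docker run -d -p 15590:1433 -e ACCEPT_EULA=Y -e SA_PASSWORD='Testing1122!' --name sqllinux mcr.microsoft.com/mssql/server:2019-CTP2.2-ubuntu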
When you do, the command will finish successfully but the container won't be started (as can be seen by the red dot in the Docker explorer).
If you look at the logs for the container (I am lazy, so I right-click on the container and choose Show Logs in VS Code 🙂) you will see
sqlservr: This program requires a machine with at least 2000 megabytes of memory. /opt/mssql/bin/sqlservr: This program requires a machine with at least 2000 megabytes of memory.
Now, if you are running Linux containers, this is an easy fix. All you have to do is right-click on the whale in the taskbar, choose Settings, then Advanced, move the slider for Memory, and click Apply.
But in Windows containers that option is not available.
If you go a-googling you will find that Shawn Melton created an issue for this many months ago, which gets referenced by this issue for the guest compute service, which references this PR in moby. But as this hasn’t been merged into master yet it is not available. I got bored of waiting for this and decided to look a bit deeper today.
Get It Working Just For Fun
So, you read the warning at the top?
Now let’s get it working. I take zero credit here. All of the work was done by Brian Weeteling b | G in this post
So you can follow Brian's examples and check out the source code and compile it as he says, or you can download the exe that he has made available (remember the warning?).
Stop Docker for Windows, and with the file downloaded and unzipped, open an admin PowerShell, navigate to the directory where the dockerd.exe file is, and run
.\dockerd.exe
You will get an output like this and it will keep going for a while.
Leave this window open whilst you are using Docker like this. Once you see that it has finished starting up, open a new PowerShell window or VS Code. You will need to run it as admin. I ran
docker ps -a
to see if it was up and available.
I also had to create a bootx64.efi file at C:\Program Files\Linux Containers which I did by copying and renaming the kernel file in that folder.
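A sketch of that step; the name of the kernel file in the folder is an assumption, so check what is actually there before copying:

Copy-Item -Path 'C:\Program Files\Linux Containers\kernel' -Destination 'C:\Program Files\Linux Containers\bootx64.efi'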
Now I can use a docker-compose file to create 5 containers. Four will be Windows containers from Andrew's Docker Hub repositories or Microsoft's Docker Hub for SQL 2012, SQL 2014, SQL 2016, and SQL 2017, and one will be the latest Ubuntu SQL 2019 CTP 2.2 image. Note that you have to use version 2.4 of the docker-compose file format, as the platform tag is not available yet in any later version, although it is coming to 3.7 soon.
Save this code as docker-compose.yml and navigate to the directory in an admin PowerShell or VS Code and run
docker-compose up -d
and now I have Windows and Linux SQL containers running together. This means that I can test some code against all versions of SQL from 2012 to 2019 easily in containers 🙂
So that is just a bit of fun.
To return to normal Docker, simply press CTRL+C in the admin PowerShell you ran .\dockerd.exe in, and you will see the logs showing it shutting down.
You will then be able to start Docker For Windows as usual.
I look forward to the time, hopefully early next year, when all of the relevant PRs have been merged and this is available in Docker for Windows.
To find a command, you can use the dbatools command Find-DbaCommand. For commands related to services, run
Find-DbaCommand Service
There are a whole bundle returned
This is how you can find any dbatools command. There is also a -Tag parameter on Find-DbaCommand.
Find-DbaCommand -Tag Service
This returns
How to use any PowerShell command
Always always start with Get-Help
Get-Help Get-DbaService -Detailed
This will show you all the information about the command including examples 🙂
All of these commands below require that the account running the PowerShell is a Local Admin on the host.
One Host Many Hosts
Now, I have used just one host for all of the examples on this page. Do not be fooled; you can always use an array of hosts. Wherever I have $ComputerName, you can set it to as many hosts as you like:
$ComputerName = 'SQL0','SQL1'
You can even get those names from a database, an Excel sheet, or a CMS.
Getting the Services
So to get the services on a machine, run
$ComputerName = 'Name of Computer'
Get-DbaService -ComputerName $ComputerName
I will use the alias ft (Format-Table) in some of the examples; that is fine for the command line, but use the full command name in any code that you write that other people will use.
You have an object returned, so you can output it to anything you want – CSV, JSON, a text file, email, Azure storage, a database; the world is your oyster.
Getting the Services for one instance
The Get-DbaService command has a number of parameters. There is an InstanceName parameter enabling you to get only the services for one instance. If we just want the default instance services
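For example (MSSQLSERVER being the name of the default instance):

Get-DbaService -ComputerName $ComputerName -InstanceName MSSQLSERVER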
You can also use the -Type parameter to get only services of a particular type. You can choose one of the following: Agent, Browser, Engine, FullText, SSAS, SSIS, SSRS, PolyBase.
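For example, to return only the Agent services:

Get-DbaService -ComputerName $ComputerName -Type Agent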
You can use Start-DbaService and Stop-DbaService to start and stop the services. They each have ComputerName, InstanceName and Type parameters like Get-DbaService.
The full text service was started with the engine service which is why it gave a warning. You can see this if you have all of the services stopped and just want to start the engine services with the type parameter.
We can stop them by type as well, although this will show an extra requirement. Let's start our MIRROR instance services again and then try to stop just the engine type.
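A sketch of that sequence:

Start-DbaService -ComputerName $ComputerName -InstanceName MIRROR
Stop-DbaService -ComputerName $ComputerName -InstanceName MIRROR -Type Engine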
You will get a warning due to the dependent services:
WARNING: [10:31:02][Update-ServiceStatus] (MSSQL$MIRROR on SQL0) The attempt to stop the service returned the following error: The service cannot be stopped because other services that are running are dependent on it.
WARNING: [10:31:02][Update-ServiceStatus] (MSSQL$MIRROR on SQL0) Run the command with ‘-Force’ switch to force the restart of a dependent SQL Agent
So all you have to do is use the Force, Luke (or whatever your name is!).
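A sketch, stopping the MIRROR engine service and its dependent services:

Stop-DbaService -ComputerName $ComputerName -InstanceName MIRROR -Type Engine -Force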
You can also stop the services for an entire host, again you will need the Force parameter.
Start-DbaService -ComputerName $ComputerName | ft
Stop-DbaService -ComputerName $ComputerName -Force | ft
Restarting Services
It will come as no surprise by now to learn that Restart-DbaService follows the same pattern. It also has ComputerName, InstanceName and Type parameters like Get-DbaService, Start-DbaService and Stop-DbaService (consistency is great; it's one of the things that is being worked on towards 1.0, as you can see in the Bill of Health).
Again you will need -Force for dependent services. You can also restart all of the services on a host in one go.
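A sketch of that command:

Restart-DbaService -ComputerName $ComputerName -Force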
Now, none of that answers @g-kannan's question: restarting only services with a certain service account.
With PowerShell you can pipe commands together so that the results of the first command are piped into the second. So we can get all of the engine services on a host for an instance with Get-DbaService and start them with Start-DbaService like this
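Something like this (a sketch, using the MIRROR instance from the earlier examples):

Get-DbaService -ComputerName $ComputerName -InstanceName MIRROR -Type Engine | Start-DbaService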
You can do the same thing with syntax that may make more sense to you if you are used to T-SQL as follows
(Get-DbaService -ComputerName $ComputerName -Type Engine) | Where State -eq 'Stopped'
and then to start only those services you could do
(Get-DbaService -ComputerName $ComputerName -Type Engine) | Where State -eq 'Stopped' | Start-DbaService
(note – you would just use Start-DbaService in this case as it won't start services that are already started!)
# Stop just one of the engine services
Stop-DbaService -ComputerName $ComputerName -InstanceName MIRROR -Type Engine
# Get the engine services
Get-DbaService -ComputerName $ComputerName -Type Engine
# This will only start the one engine service that is stopped
Start-DbaService -ComputerName $ComputerName -Type Engine
Come On Rob! Answer the question!
So now that you know a lot more about these commands, you can restart only the services using a particular service account by using Get-DbaService to get the services
Get-DbaService -ComputerName $ComputerName -Type Engine | Where StartName -eq 'thebeard\sqlsvc'
and then, once you know that you have the right 'query', you can pipe that to Restart-DbaService (like making sure your SELECT query returns the correct rows for your WHERE clause before running the DELETE or UPDATE).
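A sketch of the full pipeline, reusing the service account from the example above:

Get-DbaService -ComputerName $ComputerName -Type Engine |
    Where-Object StartName -eq 'thebeard\sqlsvc' |
    Restart-DbaService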
Michael Karpenko wrote a comment pointing out that I had not supported named instances, which was correct as it had not been written for that. Thank you Michael 🙂 I have updated the code to deal with named instances.
Confusing results
I also realised as we started testing the code that if you run the tool once and then run it again against a different availability group, it does not clear out the data folder that it uses, so you can get confusing results.
In the image below I had looked at the default instance and then a MIRROR named instance. As you can see, the results JSON on the left shows the default instance SQLClusterAG, while the one on the right shows both the SQLClusterAG and the MirrrAG instance results.
This is not so useful if you don’t notice this at first with the expanded json!! Now you may in this situation want to see the combined results from all of the availability groups on one cluster. You could gather all of the data from each instance and then add it to the data folder easily enough.
By cleaning out the data folder before running the utility the results are as expected.
Archive the data for historical analysis
One of the production DBAs pointed out that, having gathered the information, it would be useful to hold it for better analysis of repeated issues. I have added an archiving step so that when the tool runs, if there is already data in the data gathering folder, it will copy that to an archive folder and name it with the date and time that the cluster log was created, as this is a good estimation of when the analysis was performed. If an archive folder location is not provided, it will create an archive folder in the data folder. This is not an ideal solution though, as the utility will copy all of the files and folders from there to its own location, so it is better to define an archive folder in the parameters.
Get-Eventlog is sloooooooooooow
I was running the tool and noticed it sat running the system event log task for a long, long time. I ran some tests using a variation of the dbatools prompt.
This will show in the prompt how long it took to run the previous statement.
In the image above (which you can click to get a larger version as with all images on this blog) you can see that it took 18ms to set the date variable, FOUR MINUTES and FORTY THREE seconds to get the system log in the last 2 days using Get-EventLog and 29.1 seconds using Get-WinEvent and a FilterHashtable.
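A sketch of the comparison, with the two-day window from the text:

$date = (Get-Date).AddDays(-2)
# the slow way
$SystemLog = Get-EventLog -LogName System -After $date
# the much faster way
$SystemLog = Get-WinEvent -FilterHashtable @{ LogName = 'System'; StartTime = $date }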
Getting the function
This function requires PowerShell version 5 and the dbatools module.
30/11/2018 – Function has been updated to deal with named instances.
Last week the Tiger Team released their Availability Group Failover Detection Utility, which will provide root cause analysis on cluster logs, SQL error logs, and the Availability Group extended events logs. There is a blog post here and the tool can be downloaded from the Tiger Team GitHub Repository.
A Bit of Faffing*
It states on the readme for the Tiger Team GitHub Repository:
Repository for Tiger team for “as-is” solutions and tools/scripts that the team publishes.
The important words are "as-is": sometimes these tools need a bit of faffing, some looking after!
There is a pre-requisite and sometimes a little “fixing” that you need to do to get it to run correctly.
First, install the “Microsoft Visual C++ Redistributable for Visual Studio 2017” from here. On the download page, scroll down to the “Other Tools and Frameworks” section to download the redistributable (x64 version).
Then when you run FailoverDetection.exe you may get strong name validation errors like:
Unhandled Exception: System.IO.FileLoadException: Could not load file or assembly 'Microsoft.SqlServer.XEvent.Linq, Version=15.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. Strong name validation failed. (Exception from HRESULT: 0x8013141A) ---> System.Security.SecurityException: Strong name validation failed. (Exception from HRESULT: 0x8013141A)
--- End of inner exception stack trace ---
at FailoverDetector.XeventParser.LoadXevent(String xelFileName, String serverName)
Then you will need to run the sn.exe tool which is in the zip file. Use this syntax.
.\sn.exe -Vr PATHTODLLFile
I had to do it for two DLLs.
NOTE – If you get an error like this when running sn.exe (or any executable) from PowerShell it means that you have missed the .\ (dot whack) in front of the executable name.
* Faffing – doing something that is a bit awkward. See link.
Logs required for the Tool
To run the Failover Detection Utility you need to gather the following information from each replica and place it in the specified data folder.
SQL error logs
Always On Availability Groups Extended Event Logs
System Health Extended Event Logs
System log
Windows cluster log
Once you have gathered all of that data, you need to alter the configuration file for the executable.
{
    "Data Source Path": "Path to Data File",
    "Health Level": 3,
    "Instances": [
        "Replica1",
        "Replica2",
        "Replica3"
    ]
}
Running The Tool
Once you have done that you can then run the Failover Detection Utility. You can double-click the exe,
or you can run it from the command line.
In both cases it won't exit, so when you see "Saving Results to JSON file", you can press enter (sometimes twice!).
The results can be seen in the JSON file, which will be stored in a Results directory in the directory where FailoverDetection.exe exists.
You can also use some switches with the FailoverDetection utility.
--Analyze – When "--Analyze" is specified as a parameter, the utility will load the configuration file without copying log data. It assumes the log files have already been copied over. It does everything as in default mode except copying log data. This option is useful if you already have the data in the local tool execution subdirectories and want to rerun the analysis.
--Show – After analyzing the log data, the utility will display the results in the command console. Additionally, the results will be persisted to a JSON file in the results folder.
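For example, to re-run the analysis over already-gathered data and show the output in the console as well (a sketch):

.\FailoverDetection.exe --Analyze --Show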
They look like this
Again, you need to press enter for the details to come through. The results are still saved to the Results folder as json as well so you won’t lose them.
When You Are Doing Something More Than Once ….
Automate it 🙂
When I saw the data that needed to be gathered for this tool, I quickly turned to PowerShell to enable me to easily gather the information. That has turned into a function which will
Download and extract the zip file from the Tiger Team GitHub repository
Identify all of the replicas for an Availability Group and dynamically create the configuration JSON file
Gather all of the required log files and place them in a specified data folder
Run the FailoverDetection.exe with any of the switches
Include -Verbose, -Confirm and -WhatIf switches so that you can easily see what is happening, be prompted to confirm before actions, or see what would happen if you ran the function
You still need to press enter at the end though 🙁
and you will still need to install the "Microsoft Visual C++ Redistributable for Visual Studio 2017" and run the strong names tool if needed.
This function requires PowerShell version 5, the FailoverClusters module, and the dbatools module.
So you can see that if we run it with the -WhatIf switch, it shows that it would
Create some directories
Download the zip file from the repo
Extract the zip file
Copy the required logs from each of the replicas to the data folder
Create the JSON configuration file
Run the executable
NOTE – I have limited the gathering of the system event log to the last 2 days to limit the amount of time spent dealing with a large system log. I gather all of the SQL error logs in the error log path, as that works for the first scenario I wrote this for; your mileage may vary.
So if we want to run the command we can remove the -WhatIf switch.
It can take a little while to run depending on the number of replicas, size of logs, etc., but once it has started running you can do other things.
It will require being run as an account with permissions to all of the folders specified and Windows and SQL permissions on all of the replicas in the Availability Group.
As you can see below it has gathered all of the results and placed them in the data folder.
The results can be found in the results folder.
If I have already run the tool, I can use the Analyze switch to save gathering the data again. I also use the AlreadyDownloaded switch as I do not need to download the zip file again.
You will then need to press enter to get the next lot of results.
Why Not Add This To dbatools?
I haven't added this to dbatools (yet) because I wrote it in this way for a particular need and dbatools requires support for PowerShell v3. I have, however, created an issue in the dbatools GitHub Repository (as this is how you start the process of adding things to dbatools), so hopefully we can get it in there soon as well – in which case I will come back and update this post.
Following an upgrade to SQL Server, the backup share had a number of backups, some from the old version and some from the newer version. I was asked if I had a script to get the SQL version from the backup header for all of the files in the backup share.
You can get more information about the backup using Read-DbaBackupHeader, and as it is PowerShell it is easy to put this information into any format that you wish, maybe even into a database with Write-DbaDataTable.
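A sketch, assuming the share path, instance name and target table are yours to change; the SqlVersion column in the output is the piece that answers the original question:

$backups = Get-ChildItem -Path '\\Server\SQLBackups' -Filter *.bak -Recurse
$headers = Read-DbaBackupHeader -SqlInstance SQL0 -Path $backups.FullName
$headers | Select-Object BackupPath, DatabaseName, SqlVersion, BackupFinishDate
# optionally keep the information in a table for later querying
$headers | Write-DbaDataTable -SqlInstance SQL0 -Database DBAAdmin -Table BackupHeaders -AutoCreateTable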
Support for PowerShell Core in dbatools is coming along very nicely. Following some hard work by the dbatools team and some PowerShell Community members like Mathias Jessen it is now possible to run a large number of dbatools commands in PowerShell Core running on Windows. There is still a little bit of work to do to get it working on Linux and Mac but I hear the team are working hard on that.
So the code example you see above was running on Windows 10 using PowerShell 6.1.1, the current latest stable release. This is excellent news, and congratulations to all those working hard to make this happen.
If you want to try PowerShell Core, you can follow the instructions
Next week is the week when I used to dread looking at Twitter and especially the #PASSsummit hashtag, watching all of those folk having a great time in great sessions and doing all of the networking. Last year I was lucky enough to attend for the first time and take part in Chrissy LeMaire and CK's pre-con; this year I decided to add a US SQL Saturday to the list.
I shall be attending SQL Saturday Oregon in Portland and presenting about dbatools. I am really lucky, my amazing Dad, now that he is retired, happily plays Dad’s taxi (still!) and frequently drives me to the airport when I go away on trips. This is the first time he has really gone Wow! When I asked why, it is because Portland is where Grimm is filmed and he has watched the whole series and loved it!
I am looking forward to ticking off another thing on my list of things to do and presenting at a US SQL Saturday to add to 12 SQL Saturday cities in Europe and multiple other events around the world. If you are there come up and say hi.
I shall also be room monitoring for the PowerShell for the DBA session by Amy Herold t | b. There are still volunteer slots available to help; sign up here if you are going. It is a lot of fun and an excellent way to give something back. SQL Saturdays take a lot of work to organise and organisers are always glad of some help. You will meet new people and have a lot of fun as well.
To get to Seattle I am going on the SQL Train. A whole bunch of data platform folk travelling on a train together up to Seattle for PASS Summit. It looks like it will be a lot of fun 🙂
Once in Seattle it is time for the week where others will not want to look at my twitter feed 🙂 A whole week at PASS Summit watching the sessions and networking (there may be beer involved) with our peers.
My one piece of advice is please don’t hide away in your hotel room for all of the time that sessions are not on. I know that dealing with a large amount of people can be tough and you might need some me time ( I will ) but there are a lot of activities both loud and quieter where you will have the opportunity to meet up and make new friends and contacts which may really pay back further down the line and you will have fun too.
On the Tuesday I am doing a pre-conference session Professional and Proficient PowerShell: From Writing Scripts to Developing Solutions. A whole day where I will be showing how to write PowerShell modules and all of the tips and tricks that I have learnt over the years.
Wednesday sees me embracing my inner André Kamman – unfortunately he is unable to make PASS Summit this year, so I will be delivering the session dbatools PowerShell Library – The Complete Introduction in his place in room 2AB. I shall try to do it in as cool and relaxed a way as he does (it probably won't work, I will get too excited 🙂).
On Thursday I will be talking about dbachecks in room 6C which will also be streamed on PASSTv.
In between all of that, I shall be attending some networking events, visiting sessions, hanging out with people new and old and walking down the corridors, so if you see me, stop me and say hi, I’d love to meet you 🙂
(note – just before my sessions I may be too nervous to properly have a conversation)
Hopefully, I will meet you there and for the twitter folk stuck back in Europe I empathise 🙂
It's been a few weeks since I have blogged as I have been busy with a lot of other things, one of which is preparing for my PASS Summit pre-con. That has led to me improving the CI/CD for dbachecks by adding auto-creation of online documentation, which you can find at https://dbachecks.readthedocs.io or by running Get-Help with the -Online switch for any dbachecks command.
Get-Help Invoke-DbcCheck -Online
I will blog about how dbachecks uses Azure DevOps to do this another time
PSPowerHour
The PowerShell community members Michael T Lombardi and Warren Frame have created PSPowerHour. PSPowerHour is “like a virtual User Group, with a lightning-demo format, and room for non-PowerShell-specific content. Eight community members will give a demo each PowerHour.”
While watching the first group of sessions, I saw Andrew Wickham demonstrate using dbatools with trace flags, and I thought that needed to be added to dbachecks, so I created an issue. Anyone can do this to file improvements as well as bugs for members of the team to code.
Trace Flags
The previous release of dbachecks brought 2 new checks for trace flags: one for trace flags expected to be running and one for trace flags not expected to be running.
Once dbachecks is installed, you can find the checks using
Get-DbcCheck
You can filter using the -Pattern parameter:
Get-DbcCheck -Pattern traceflag
This will show you:
the UniqueTag, which will enable you to run only that check if you wish
AllTags, which shows which tags will include that check
Config, which will show you which configuration items can be set for this check
The trace flag checks require the app.sqlinstance configuration, which is the list of SQL instances that the checks will run against. You can also specify the instances as a parameter for Invoke-DbcCheck as well.
The configuration for the expected trace flags is policy.traceflags.expected. By default it is set to null. You can see what configuration it has using
Get-DbcConfig policy.traceflags.expected
So if you want to check that there are no trace flags running, then you can run
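A sketch of the sort of command this refers to; the expected value of 1117 is an inference from the failing output below:

$null = Set-DbcConfig -Name policy.traceflags.expected -Value 1117
Invoke-DbcCheck -Check TraceFlagsExpected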
[-] Expected Trace Flags 1117 exist on sql0 593ms
Expected 1117 to be found in collection @(), because We expect that Trace Flag 1117 will be set on sql0, but it was not found.
So we have a failing test. We need to fix that. We can use dbatools
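A sketch of the fix, with the instance name taken from the output above:

Enable-DbaTraceFlag -SqlInstance sql0 -TraceFlag 1117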
If you just need to see what trace flags are enabled you can use
Get-DbaTraceFlag -SqlInstance $instance
Reset the configuration for the expected trace flag to an empty array, and then set the configuration for trace flags we do not expect to be running to 1117.
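A sketch of those configuration changes; the notexpected configuration name is an assumption based on the check name below:

$null = Set-DbcConfig -Name policy.traceflags.expected -Value @()
$null = Set-DbcConfig -Name policy.traceflags.notexpected -Value 1117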
Then run the trace flags not expected to be running check with
Invoke-DbcCheck -Check TraceFlagsNotExpected
It will fail as 1117 is still running
and give the message
[-] Expected Trace Flags 1117 to not exist on sql0 321ms
Expected 1117 to not be found in collection 1117, because We expect that Trace Flag 1117 will not be set on sql0, but it was found.
So to resolve this failing check we need to disable the trace flag and we can do that with dbatools using
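A sketch of that command:

Disable-DbaTraceFlag -SqlInstance sql0 -TraceFlag 1117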