How to fork a GitHub repository and contribute to an open source project

I enjoy maintaining open source GitHub repositories such as dbachecks and ADSNotebook. I absolutely love it when people add more functionality to them.

To collaborate on a repository in GitHub you need to follow these steps:

  • Fork the repository into your own GitHub
  • Clone the repository to your local machine
  • Create a new branch for your changes
  • Make some changes and commit them with useful messages
  • Push the changes to your repository
  • Create a Pull Request from your repository back to the original one

You will need to have git.exe available, which you can download and install from https://git-scm.com/downloads if required.

Fork the repository into your own GitHub

A fork is a copy of the original repository. This allows you to make changes without affecting the original project. It does not get updated when the original project gets updated (we will talk about that in the next post). This enables you to code a new feature or a bug fix, test it locally and make sure it is working.

Let’s take dbachecks as our example. Start by going to the project in GitHub. In this case the URL is https://github.com/sqlcollaborative/dbachecks. You will see a Fork button at the top right of the page.

When you click the button the repository is copied into your own GitHub account

The page will open at https://github.com/YOURGITHUBUSERNAME/NameOfRepository, in this case https://github.com/SQLDBAWithABeard/dbachecks. You will be able to see that it is a fork of the original repository at the top of the page.

Clone the repository to your local machine

Forking the repository has created a remote repository stored on the GitHub servers. Now that the repository has been forked you need to clone it to your local machine to create a local repository so that you can start coding your amazing fix. When you have finished you can then sync it back to your remote repository ready for a Pull Request back to the original repository.

In your browser, at the remote repository that you just created (https://github.com/YOURGITHUBUSERNAME/NameOfRepository if you have closed the page), click on Clone or Download and then the icon to the right to copy the URL.

You can clone your repository in VS Code or Azure Data Studio by pressing F1 or CTRL + SHIFT + P on Windows or Linux, or ⇧⌘P or F1 on a Mac

then start typing clone until you see Git: Clone and press Enter or click it

Paste in the URL that you just copied and press Enter. A dialog will open asking you to select a folder. This is the parent directory where your local repository will be created. The clone will create a directory for your repository so you do not need to. I suggest that you use a folder called GitHub or something similar to place all of the repositories that you are going to clone and create.

When it has finished it will ask you if you wish to open the repository

If you click Open it will close anything that you have already got open and open the folder. If you click Add to Workspace it will add the folder to the workspace and leave everything you already had open as it was. Surprisingly, clicking Open in New Window will open the folder in a new instance of Visual Studio Code or Azure Data Studio!

You will also be able to see the local repository files on your computer.

You can clone the repository at the command line if you wish by navigating to your local GitHub directory and running git clone TheURLYouCopied
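For example, using the fork from above:

git clone https://github.com/SQLDBAWithABeard/dbachecks.git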

Now your local repository has been created, it’s time to do your magic coding.

Create a new branch for your changes

It is a good idea to create a branch for your amazing new feature. This enables you to work on coding for that feature in isolation. It has the added advantage that if you mess it right royally up, you can just delete that branch and start again with a new one!

To create a branch in VS Code or Azure Data Studio you can click on the branch name at the bottom left.

Or open the Command Palette and type Branch until you see Git: Create Branch

You will be prompted for a branch name

I like to choose a name that relates to the code that I am writing, like configurable_engine or removeerroringexample. You can see the name of the branch in the bottom left so that you always know which branch you are working on.

The icon shows that the branch is only local and hasn’t been pushed (published) to the remote repository yet
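If you prefer the command line, you can create and switch to a new branch in one step (using the branch name that appears later in this post):

git checkout -b AwesomeNewFeature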

Make some changes and commit them with useful messages

Now you can start writing your code for your awesome new feature, bug fix or maybe just a documentation improvement. Keep your commits small and give them useful commit messages that explain why you have made the change, as the diff tooling will be able to show what change you have made.

Write your code or change the documentation, save the file and in Visual Studio Code or Azure Data Studio you will see that the source control icon has a number on it

Clicking on the icon will show the files that have changes ready

You can write your commit message in the box and press CTRL + ENTER to commit your changes with a message.

If you want to do this at the command line, you can use git status to see which files have changes

You will need to git add . or git add .\path\to\file to stage your changes ready for committing and then git commit -m 'Commit Message' to commit them.

Notice that I did exactly what I just said not to do! A better commit message would have been So that people can find the guide to forking and creating a PR
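At the command line the whole sequence would look something like this, using that better commit message:

git status
git add .
git commit -m 'So that people can find the guide to forking and creating a PR'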

Push the changes to your repository

You only have the changes that you have made in your local repository on your computer. Now you need to push those changes to GitHub, your remote repository. You can click on the publish icon.

You will get a pop-up asking you if you wish to stage your changes. I click Yes and never Always so that I can use this prompt as a sanity check that I am doing the right thing

At the command line you can push the branch; if you do that, you will have to tell git where the branch needs to go. If you just type git push it will helpfully tell you

fatal: The current branch AwesomeNewFeature has no upstream branch.
To push the current branch and set the remote as upstream, use

    git push --set-upstream origin AwesomeNewFeature

So you will need to use that command

You can see in the bottom left that the icon has changed

and if you read the output of the git push command you will also see what the next step is.

Create a Pull Request from your repository back to the original one

You can CTRL + click the link in the git push output if you have pushed from the command line, or if you visit either your repository or the original repository in your browser you will see that there is a Compare and Pull Request button.

You click that and let GitHub do its magic

and it will create a Pull Request for you ready for you to fill in the required information, ask for reviewers and other options. Once you have done that you can click Create pull request and wait for the project maintainer to review it and (hopefully) accept it into their project

You can find the Pull Request that I created here https://github.com/sqlcollaborative/dbachecks/pull/720 and see how the rest of this blog post was created.

If you make more changes to the code in the same branch in your local repository and push them, they will automatically be added to this Pull Request whilst it is open. You can do this if the maintainer or reviewer asks for changes.

Shane has asked for a change

So I can go to my local repository in Azure Data Studio and make the requested change and save the file. If I look in the source control in Azure Data Studio I can again see there is a change waiting to be committed and if I click on the name of the file I can open the diff tool to see what the change was

Once I am happy with my change I can commit it again in the same way as before either in the editor or at the command line. The icon at the bottom will change to show that I have one commit in my local repository waiting to be pushed

To do the same thing at the command line I can type git status and see the same thing.

I can then push my change to my remote repository either in the GUI or by using git push

and it will automatically be added to the Pull Request as you can see

Now that the required changes for the review have been made, the review has been approved by Shane and the pull request is now ready to be merged. (You can also see that dbachecks runs some checks against the code when a Pull Request is made)

Many, many thanks to Shane b | t who helped with the writing of this post even whilst on a “no tech” holiday.

Go Ahead – Contribute to an Open Source Project

Hopefully you can now see how easy it is to create a fork of a GitHub repository, clone it to your own machine and contribute. There are many open source projects that you can contribute to.

You can use this process to contribute to the Microsoft Docs for example by clicking on the edit button on any page.

You can contribute to other open source projects, or go and find the ones that you use and can help with.


Create a PowerShell Notebook for Azure Data Studio with PowerShell

The latest update to the ADSNotebook PowerShell module I blogged about here now enables the creation of PowerShell notebooks with PowerShell.

You can install the module with

Install-Module ADSNotebook

or if you have already installed it you can use

Update-Module ADSNotebook

In the latest release, there is an extra parameter for New-AdsWorkBook, -Type, which will accept either SQL or PowerShell.

Create a PowerShell Notebook with PowerShell Rob

OK!

Here is some code to create a PowerShell Notebook. First we will create some cells using New-AdsWorkBookCell including all the markdown to add images and links. You can find my notebooks which explain how to write the markdown for your notebooks in my GitHub Presentations Repository

$introCelltext = "# Welcome to my Auto Generated PowerShell Notebook

## dbatools
![image](https://user-images.githubusercontent.com/6729780/68845538-7afcd200-06c3-11ea-952e-e4fe72a68fc8.png)  

dbatools is an open-source PowerShell Module for administering SQL Servers.
You can read more about dbatools and find the documentation at [dbatools.io](https://dbatools.io)
"
$SecondCelltext = "### Installation
You can install dbatools from the PowerShell Gallery using ``Install-Module dbatools``
"

$thirdcelltext = "Install-Module dbatools"

$fourthCelltext = "### Getting Help
You should always use ``Get-Help`` to find out how to use dbatools (and any PowerShell) commands"

$fifthcelltext = "Get-Help Get-DbaDatabase"
$sixthCelltext = "Try a command now. Get the name, owner and collation of the user databases on the local instance"
$seventhCellText = "Get-DbaDatabase -SqlInstance localhost -ExcludeSystem | Select Name, Owner, Collation"

$Intro = New-ADSWorkBookCell -Type Text -Text $introCelltext
$second = New-ADSWorkBookCell -Type Text -Text $SecondCelltext
$third = New-ADSWorkBookCell -Type Code -Text $thirdcelltext
$fourth = New-ADSWorkBookCell -Type Text -Text $fourthCelltext
$fifth = New-ADSWorkBookCell -Type Code -Text $fifthcelltext
$sixth = New-ADSWorkBookCell -Type Text -Text $sixthCelltext
$seventh = New-ADSWorkBookCell -Type Code -Text $seventhCellText

Then we will create a new workbook using those cells

$path = 'C:\temp\dbatools.ipynb'
New-ADSWorkBook -Path $path -cells $Intro,$second,$third,$fourth,$fifth,$sixth,$Seventh -Type PowerShell

Then, when that code is run we can open the Notebook and ta-da

And it is super quick to run as well

UPDATE – Tyler Leonhardt t from the PowerShell team asked

Challenge accepted, with extra meta, here is the PowerShell to create a PowerShell Notebook which will create a PowerShell Notebook!!

PowerShell Notebooks in Azure Data Studio

The latest release of the insiders edition of Azure Data Studio brings the first edition of PowerShell Notebooks!

You can download the latest insiders edition from the link above; it can be installed alongside the stable release.

To access many of the commands available use F1 to open the command palette (like many of my tips this also works in Visual Studio Code). You can then start typing to get the command that you want.

You can then hit enter with the command that you want highlighted, use the mouse or use the shortcut which is displayed to the right.

In a new notebook, you can click the drop-down next to Kernel and now you can see that PowerShell is available

When you choose the PowerShell kernel, you will get a prompt asking you to configure the Python installation

If you have Python already installed you can browse to the location that it is installed or you can install Python. In the bottom pane you will be able to see the progress of the installation.

When it has completed, you will see

You may also get a prompt asking if you would like to upgrade some packages

Again this will be displayed in the tasks pane

Adding PowerShell


To add PowerShell code to the notebook click the Code button at the top of the file

or the one you can find by highlighting above or below a block

I did not have IntelliSense, but you can easily write your code in Azure Data Studio or Visual Studio Code and paste it in the block.

Interestingly Shawn Melton ( t ) did

This was because he had the PowerShell extension installed and I did not (I know!!).
If you find you don’t have IntelliSense then install the PowerShell extension!

Clicking the play button (which is only visible when you hover the mouse over it) will run the code

You can clear the results from every code block using the clear results button at the top

Otherwise, you can save the results with the Notebook by saving it. This is the part that is missing from running PowerShell in the Markdown blocks in a SQL Notebook as I described here

I am looking forward to how this develops. You can find my sample PowerShell notebook (with the code results) here

PowerShell in SQL Notebooks in Azure Data Studio

I have done a lot of writing in the last few months but you see no blog posts! My wonderful friend Chrissy and I are writing “dbatools in a Month of Lunches” to be published by Manning. That has taken up a lot of my writing mojo. We have hit a little break whilst we have some reviews done ready for the MEAP (For everyone who asks, the answer is the unfulfilling ‘soon’) so it’s time for a blog post!

SQL Notebooks are cool

I have had a lot of fun with SQL Notebooks recently. I have presented a session about them at a couple of events this month, DataGrillen and SQL Saturday Cork. Here is a little snippet

Yes, you can run PowerShell in a SQL Notebook in Azure Data Studio just by clicking a link in the markdown cell. This opens up a lot of excellent possibilities.

I have had several discussions about how SQL Notebooks can be used by SQL DBAs within their normal everyday roles. (Mainly because I don’t really understand what the sorcerers of data science do with notebooks!). I have helped clients to look at some of their processes and use SQL Notebooks to help with them. Creating Disaster Recovery or Change Run-books or Incident Response Templates or using them for product demonstrations. Of course, I needed to use PowerShell in that 🙂

I have really enjoyed working out how to run PowerShell in the markdown in a SQL Notebook in Azure Data Studio and I think Anthony the kubernetes magician did too!

OK, enough magic puns, let’s talk about PowerShell in SQL Notebooks. You can read about how to create a SQL Notebook and run T-SQL queries here (you no longer need the Insider Edition, by the way).

PowerShell in Markdown!

First, before I go any further, I must say this. I was at the European PowerShell Conference when I was working this out and creating my sessions and I said the words

“Cool, I can click a link and run PowerShell, this is neat”

A Beardy fellow in Hannover

This stopped some red team friends of mine in their tracks and they said “Show me”. One of them was rubbing their hands with glee! You can imagine the sort of wicked, devious things that they were immediately considering doing.

Yes, it’s funny but it also carries a serious warning. Without understanding what it is doing, please don’t enable PowerShell to be run in a SQL Notebook that someone sent you in an email or that you find on GitHub. In the same way as you don’t open the Word document attachment which will get a thousand million trillion pounddollars into your bank account or run code you copy from the internet on production without understanding what it does, this could be a very dangerous thing to do.

With that warning out of the way, there are loads of really useful and fantastic use cases for this. SQL Notebooks make great run-books or incident response recorders and PowerShell is an obvious tool for this. (If only we could save the PowerShell output in a SQL Notebook, this would be even better)

How on earth did you work this out?

Someone asked me how I worked it out. I didn’t! It began with Vicky Harp, PM lead for the SQL Tools team at Microsoft

I then went and looked at Kevin Cunnane‘s notebook. Kevin is a member of the tools team working on Azure Data Studio. With SQL Notebooks, you can double click the markdown cell and see the code that is behind it. To understand how it is working, lets deviate a little.

Keyboard Shortcuts

If you click the cog at the bottom left of Azure Data Studio and choose Keyboard Shortcuts

you can make Azure Data Studio (and Visual Studio Code) work exactly how you want it to. Typing in the top box will find a command and you can then set the shortcuts that you want to use to save yourself time.

This also enables you to see the command that is called when you use a keyboard shortcut. For example, you can see that for the focus terminal command it says workbench.action.terminal.focus.

It turns out that you can call this as a link in a Markdown document using HTML with <a href=""> and adding command: prior to the command text. When the link is clicked the command will run. Cool 🙂

For this to be able to work (you read the warning above?) you need to set the Notebook to be trusted by clicking this button.

This will allow any command to be run. Of course, people with beards will helpfully advise when this is required for a SQL Notebook. (Safe to say people attempting nefarious actions will try the same with your users)

Now that we know how to run an Azure Data Studio command using a link in a markdown cell the next step is to run a PowerShell command. I headed to the Visual Studio Code documentation and found

Send text from a keybinding
The workbench.action.terminal.sendSequence command can be used to send a specific sequence of text to the terminal, including escape sequences

That’s the command we need, however, we still need to craft the command so that it will work as a link. It needs to be converted into a URL.

I started by using this website https://www.url-encode-decode.com/ to do this. This is also how you can check the code in other people’s notebooks: use the decode capability.

Encoding Set-Location C:\dbachecks gives Set-Location+C%3A%5Cdbachecks

So I can just put that code into the href link and bingo!

If only it was that easy!!

Some Replacing is required

The + needs to be replaced with a space or %20

You also need to double the \ and replace the %3A with a :
The " needs to be replaced with \u022, the ' with \u027, the curly braces won’t work unless you remove the %0D%0A. Got all that? Good!

Once you have written your PowerShell, encoded it, performed the replacements, you add \u000D at the end of the code to pass an enter to run the code and then place all of that into a link like this

<a href="command:workbench.action.terminal.sendSequence?%7B%22text%22%3A%22 PLACE THE ENCODED CODE HERE %22%7D">Link Text</a>

This means that if you want to add the PowerShell code to set a location and then list the files and folders in that location to a Markdown cell using PowerShell like this

Set-Location C:\dbachecks
Get-ChildItem

You would end up with a link like this

<a href="command:workbench.action.terminal.sendSequence?%7B%22text%22%3A%22 Set-Location C:%5C%5Cdbachecks \u000D Get-ChildItem \u000D %22%7D">Set Location and list files</a>

Doing something more than once?

I don’t want to remember that all of the time so I wrote a PowerShell function. You can find it on GitHub https://github.com/SQLDBAWithABeard/Functions/blob/master/Convert-ADSPowerShellForMarkdown.ps1

This will take a PowerShell command and turn it into a link that will work in an Azure Data Studio markdown cell. It’s not magic, it’s PowerShell. There is a -ToClipboard parameter which will copy the code to the clipboard ready for you to paste into the cell (on Windows machines only).
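The shape of the transformation is roughly this, a minimal sketch with an assumed function name rather than the real code in the repository above:

function ConvertTo-TerminalLink {
    # Sketch: URL-encode a PowerShell command, apply the replacements
    # described above and wrap it in a sendSequence link
    param(
        [Parameter(Mandatory)][string]$Command,
        [string]$LinkText = 'Run the code'
    )
    Add-Type -AssemblyName System.Web
    $encoded = [System.Web.HttpUtility]::UrlEncode($Command)
    $encoded = $encoded -replace '\+', ' '          # + becomes a space
    $encoded = $encoded -replace '%3[Aa]', ':'      # %3A back to a :
    $encoded = $encoded -replace '%5[Cc]', '%5C%5C' # double up the \
    $encoded = $encoded -replace '%22', '\u022'     # " becomes \u022
    $encoded = $encoded -replace '%27', '\u027'     # ' becomes \u027
    # \u000D at the end passes an Enter to run the code
    "<a href=""command:workbench.action.terminal.sendSequence?%7B%22text%22%3A%22$encoded \u000D%22%7D"">$LinkText</a>"
}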

Giants

There are many uses for this but here’s one I think is cool.

The link below will go to a notebook, which will show you how the giants upon whose shoulders I stand

Glenn Berry,
Chrissy LeMaire,
André Kamman,
Gianluca Sartori

have enabled me to create a SQL Notebook with a link which will run some PowerShell to create a SQL Notebook which will have all of the Diagnostic Queries in it.

You could possibly use something like it for your incident response SQL Notebook.

It’s also cool that GitHub renders the notebook in a browser (You can’t run PowerShell or T-SQL from there though, you need Azure Data Studio!)

https://github.com/SQLDBAWithABeard/Presentations/blob/master/2019/Berlin%20SQL%20User%20Group/04%20-%20Glenn%20Berry%20Notebook.ipynb


Azure SQL Linux VM – configuring SQL, installing pwsh and connecting and interacting with dbatools

In my posts about using Azure DevOps to build Azure resources with Terraform, I built a Linux SQL VM. I used the Terraform in this GitHub repository and created this

Connecting with MobaXterm

I had set the Network security rules to accept connections only from my static IP using variables in the Build Pipeline. I use MobaXterm as my SSH client. It’s a free download. I click on sessions

Choose an SSH session and fill in the remote host address from the portal

fill in the password and

Configuring SQL

The next task is to configure the SQL installation. Following the instructions on the Microsoft docs site I run
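That command, from the docs, is the mssql-conf setup tool:

sudo /opt/mssql/bin/mssql-conf setup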

enter the sa password and

Now to start SQL
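with systemctl:

sudo systemctl start mssql-server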

Installing pwsh

Installing PowerShell Core (pwsh) is easy with snap
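The PowerShell snap needs the classic confinement flag:

sudo snap install powershell --classic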

A couple of minutes of downloads and install

and pwsh is ready for use

Installing dbatools

To install dbatools from the PowerShell Gallery simply run
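Install-Module dbatools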

This will prompt you to allow installing from an untrusted repository

and dbatools is ready to go

Connecting with Azure Data Studio

I can also connect with Azure Data Studio

and connect

Just a quick little post explaining what I did 🙂

Happy Linuxing!

What’s a SQL Notebook in Azure Data Studio?

Azure Data Studio is a cross-platform database tool for data professionals using the Microsoft family of on-premises and cloud data platforms on Windows, MacOS, and Linux.

Recently Vicky Harp tweeted

By the way, you can watch a recording from SQLBits of Vicky’s session


So in the interest of learning about something new I decided to give it a try.

Install The Insiders Edition

Unlike Visual Studio Code which has a link to the insiders download on the front page, you will have to visit the GitHub repository for the links to download the insiders release of Azure Data Studio. Scroll down and you will see

Try out the latest insiders build from master:

See the change log for additional details of what’s in this release.

Once you have installed you can connect to an instance, right click and choose New Notebook or you can use File – New Notebook

Incidentally, I use the docker-compose file here to create the containers and I map C:\MSSQL\BACKUP\KEEP on my local machine (where my backups are) to /var/opt/mssql/backups on the containers on lines 10 and 17 of the docker-compose, so change as required. If you want to follow along then put the ValidationResults.bak in the folder on your local machine.
The Create-Ag.ps1 shows the code and creates an AG with dbatools. But I digress!

Install Notebook Dependencies

Once you click New Notebook you will get a prompt to install the dependencies.

It will show its output

and take a few minutes to run

It took all of 11 minutes on my machine


Create a Notebook

OK, so now that we have the dependencies installed we can create a notebook. I decided to use the ValidationResults database that I use for my dbachecks demos and describe here. I need to restore it from my local folder that I have mapped as a volume to my container. Of course, I use dbatools for this 🙂

I had already got a connection saved to the instance in Azure Data Studio, you may need to create a new one using the new connection icon at the top left and filling in the details. The password is in the code above.



Now I can start with my notebook. I am faced with this



I click on text and provide an intro


Once I had written that and clicked out, I couldn’t see what to do straight away!

Then I saw the code and text buttons at the top 🙂 Right, lets get on with it 🙂 I hit the code button and paste in the T-SQL to reset the dates in the database to simulate dbachecks having been run this morning.


There’s a run cell button on the right and when I press it

Cool 🙂

If the SQL query has results then they are shown as well

This is fun and I can see plenty of uses for it. Go and have a play with SQL notebooks 🙂

Source Control

I used CTRL K, CTRL O to open a folder and saved my notebook in my local Presentations folder which is source controlled. When I opened the explorer CTRL + SHIFT + E I can see that the folder and the file are colour coded green and have a U next to them marking them as Untracked. I can also see that the source control icon has a 1 for the number of files with changes and in the bottom left that I am in the master branch.

If I click on the source control icon (or CTRL + SHIFT + G) I can see the files with the changes and can enter a commit message

I then press CTRL + ENTER to commit my change and get this pop-up


As I only have one file and it has all the changes for this commit I click yes. If I had changed more than one file and only wanted to commit a single one at a time I would hover my mouse over the file and click the + to stage my change.



If I make a further change to the notebook and save it, I can see that the source control provider recognises the change but this time the folder the file is in and the file are colour coded brown with an M to show that they have been modified.

Unlike Visual Studio Code, when you then click on the source control icon and click on the change it does not show the differences in the notebook although this works with SQL files.

When I have made all my changes and committed them with good commit messages


I can see that there are 3 local changes ready to be pushed to my remote repository (GitHub in this case) and 0 remote commits in this branch by looking at the bottom left

I can click on the “roundy roundy” icon (I don’t know its proper name 😊) and synchronise my changes. This comes with a pop-up

Personally I never press OK, Don’t Show Again because I like the double check and to think “Is this really what I want to do right now”. Once I press OK my changes will be synched with the remote repository. Explaining this means that you can find the notebook I have used in my Presentations GitHub Repository which means that you can run the Notebook too using the docker-compose file here and the instructions further up in the post.

Using Docker to run Integration Tests for dbachecks

My wonderful friend André Kamman wrote a fantastic blog post this week SQL Server Container Instances via Cloudshell about how he uses containers in Azure to test code against different versions of SQL Server.

It reminded me that I do something very similar to test dbachecks code changes. I thought this might make a good blog post. I will talk through how I do this locally as I merge a PR from another great friend Cláudio Silva who has added agent job history checks.

GitHub PR VS Code Extension

I use the GitHub Pull Requests extension for VS Code to work with pull requests for dbachecks. This enables me to see all of the information about the Pull Request, merge it, review it, comment on it all from VS Code

I can also see which files have been changed and which changes have been made

Once I am ready to test the pull request I perform a checkout using the extension

This will update all of the files in my local repository with all of the changes in this pull request

You can see at the bottom left that the branch changes from development to the name of the PR.

Running The Unit Tests

The first thing that I do is to run the Unit Tests for the module. These will test that the code is following all of the guidelines that we require and that the tests are formatted in the correct way for the Power Bi to parse. I have blogged about this here and here and we use these Pester tests in our CI process in Azure DevOps, which I described here.

I navigate to the root of the dbachecks repository on my local machine and run
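A sketch of that, assuming the module’s Pester tests live in the tests folder of the repository:

Invoke-Pester .\tests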

and after about a minute

Thank you Cláudio, the code has passed the tests 😉

Running Some Integration Tests

The difference between Unit tests and Integration tests in a nutshell is that the Unit tests are testing that the code is doing what is expected without any other external influences whilst the Integration tests are checking that the code is doing what is expected when running on an actual environment. In this scenario we know that the code is doing what is expected but we want to check what it does when it runs against a SQL Server and even when it runs against multiple SQL Servers of different versions.

Multiple Versions of SQL Server

As I have described before my friend and former colleague Andrew Pruski b | t has many resources for running SQL in containers. This means that I can quickly and easily create fresh uncontaminated instances of SQL 2012, 2014, 2016 and 2017 really quickly.

I can create 4 instances of different versions of SQL in (a tad over) 1 minute. How about you?

Imagine how long it would take to run the installers for 4 versions of SQL and the pain you would have trying to uninstall them and make sure everything is ‘clean’. Even images that have been sysprep’d won’t be done in 1 minute.

Docker Compose Up?

So what is this magic command that has enabled me to do this? docker compose uses a YAML file to define multi-container applications. This means that with a file called docker-compose.yml like this
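(a minimal sketch of the kind of file used; the image name, password and single service here are placeholders, the real file defined four services)

version: '3'
services:
  sql2017:
    image: microsoft/mssql-server-windows-developer
    ports:
      - "15589:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Testing1122!
  # ...plus similar services for the SQL 2012, 2014 and 2016 images
  # on ports 15586, 15587 and 15588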

and in that directory just run
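docker-compose up -d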

and 4 SQL containers are available to you. You can interact with them via SSMS if you wish with localhost comma PORTNUMBER. The port numbers in the above file are 15586, 15587, 15588 and 15589.

Now it must be noted, as I describe here that first I pulled the images to my laptop. The first time you run docker compose will take significantly longer if you haven’t pulled the images already (pulling the images will take quite a while depending on your broadband speed)

Credential

The next thing is to save a credential to make it easier to automate. I use the method described by my PowerShell friend Jaap Brasser here. I run this code
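That method exports a credential to disk with Export-Clixml (the path here is hypothetical):

Get-Credential | Export-Clixml -Path C:\temp\SACredential.xml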

and then I can create a credential object using
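$cred = Import-Clixml -Path C:\temp\SACredential.xml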

Check The Connections

I ensure a clean session by removing the dbatools and dbachecks modules and then import the local version of dbachecks and set some variables
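Roughly like this, with the instance names built from the ports mapped in the docker-compose file:

Remove-Module dbatools, dbachecks -ErrorAction SilentlyContinue
Import-Module .\dbachecks.psd1
$SQLInstances = 'localhost,15586', 'localhost,15587', 'localhost,15588', 'localhost,15589'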

Now I can start to run my Integration tests. First reset the dbachecks configuration and set some configuration values
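Something like:

Reset-DbcConfig
Set-DbcConfig -Name app.sqlinstance -Value $SQLInstances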

Then I will run the dbachecks connectivity checks and save the results to a variable without showing any output
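# -Show None suppresses the output; -PassThru returns the results object
$ConnectivityResults = Invoke-DbcCheck -SqlInstance $SQLInstances -SqlCredential $cred -Check InstanceConnection -Show None -PassThru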

I can then use Pester to check that dbachecks has worked as expected by testing if the failedcount property returned is 0.
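Describe "Connectivity Checks" {
    It "The connectivity checks should all pass" {
        $ConnectivityResults.FailedCount | Should -Be 0
    }
}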

What is the Unit Test for this PR?

Next I think about what we need to be testing for this PR. The Unit tests will help us.

Choose some Integration Tests

This check is checking the Agent job history settings and the unit tests are

  • It “Passes Check Correctly with Maximum History Rows disabled (-1)”
  • It “Fails Check Correctly with Maximum History Rows disabled (-1) but configured value is 1000”
  • It “Passes Check Correctly with Maximum History Rows being 10000”
  • It “Fails Check Correctly with Maximum History Rows being less than 10000”
  • It “Passes Check Correctly with Maximum History Rows per job being 100”
  • It “Fails Check Correctly with Maximum History Rows per job being less than 100”

So we will check the same things on real actual SQL Servers. First though we need to start the SQL Server Agent as it is not started by default. We can do this as follows

Unfortunately, the agent service won’t start in the SQL 2014 container so I can’t run agent integration tests for that container, but it’s better than no integration tests.

This is What We Will Test

So we want to test if the check will pass with default settings. In general, dbachecks will pass for default instance, agent or database settings values by default.

We also want the check to fail if the configured value for dbachecks is set to default but the value has been set on the instance.

We want the check to pass if the configured value for the dbachecks configuration is set and the instance (agent, database) setting matches it.

If You Are Doing Something More Than Once ……

Let’s automate that. We are going to be repeatedly running those three tests for each setting that we are running integration tests for. I have created 3 functions for this, again checking that FailedCount or PassedCount is 0 depending on the test.

Now I can use those functions inside a loop in my Integration Pester Test

Write Some Integration Tests

So for this new test I have added a value to the TestingTheChecks array then I can test my checks. The default check I can check like this

Now I need to change the configurations so that they do not match the defaults and run the checks again

Next we have to change the instance settings so that they match the dbachecks configuration and run the checks and test that they all pass.

We will (of course) use dbatools for this. First we need to find the command that we need

and then work out how to use it
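Presumably with Find-DbaCommand again and then Get-Help; Set-DbaAgentServer is the dbatools command that configures these agent history settings:

Find-DbaCommand -Pattern AgentServer
Get-Help Set-DbaAgentServer -Examples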

There is an example that does exactly what we want 🙂 So we can run this.

Run the Integration Tests

And then we will check that all of the checks are passing and failing as expected

Integration Test For Error Log Counts

There is another integration test there for the error logs count. This works in the same way. Here is the code

Merge the Changes

So with all the tests passing I can merge the PR into the development branch and Azure DevOps will start a build. Ultimately, I would like to add the integration to the build as well following André‘s blog post but for now I used the GitHub Pull Request extension to merge the pull request into development which started a build and then merged that into master which signed the code and deployed it to the PowerShell gallery as you can see here and the result is

https://www.powershellgallery.com/packages/dbachecks/1.1.164

Running Windows and Linux SQL Containers together

Just for fun I decided to spend Christmas Eve getting Windows and Linux SQL containers running together.

WARNING

This is NOT a production ready solution, in fact I would not even recommend that you try it.
I definitely wouldn’t recommend it on any machine with anything useful on it that you want to use again.
We will be using a re-compiled dockerd.exe created by someone else and you know the rules about downloading things from the internet don’t you? and trusting unknown unverified people?

Maybe you can try this in an Azure VM or somewhere else safe.

Anyway, with that in mind, lets go.

Linux Containers On Windows

You can run Linux containers on Windows in Docker as follows. You need to be running the latest Docker for Windows.

Right click on the whale in the task bar and select Settings

Notice that I am running Windows Containers as there is a switch to Linux containers option. If you see Switch to Windows containers then click that first.

Click on Daemon and then tick the experimental features tick box and press apply.

Docker will restart and you can now run Linux containers alongside windows containers.

So you can pull the Ubuntu container with
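docker pull ubuntu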

and then you can run it with
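docker run -it ubuntu /bin/bash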

There you go, one Linux container running 🙂
A good resource for learning bash for SQL Server DBAs is Kellyn Pot’Vin-Gorman’s b | t series on Simple Talk

Type Exit to get out of the container and to remove it


Running SQL Linux Containers On Windows

So can we run SQL Containers ?

Well, we can pull the image successfully.

If you try that without the experimental features enabled you will get this error.

image operating system “linux” cannot be used on this platform

So you would think that what you can do is to use the code from Andrew ‘dbafromthecold’ Pruski’s b | t excellent container series

When you do, the command will finish successfully but the container won’t be started (as can be seen by the red dot in the docker explorer).

If you look at the logs for the container (I am lazy, I right click on the container and choose show logs in VS Code 🙂 ) you will see

sqlservr: This program requires a machine with at least 2000 megabytes of memory.
/opt/mssql/bin/sqlservr: This program requires a machine with at least 2000 megabytes of memory.

Now, if you are running Linux containers, this is an easy fix. All you have to do is to right click on the whale in the taskbar, choose Settings, Advanced and move the slider for the Memory and click apply.

But in Windows containers that option is not available.

If you go a-googling you will find that Shawn Melton created an issue for this many months ago, which gets referenced by this issue for the guest compute service, which references this PR in moby. But as this hasn’t been merged into master yet it is not available. I got bored of waiting for this and decided to look a bit deeper today.

Get It Working Just For Fun

So, you read the warning at the top?

Now let’s get it working. I take zero credit here. All of the work was done by Brian Weeteling b | G in this post

So you can follow Brian’s examples and check out the source code and compile it as he says, or you can download the exe that he has made available (remember the warning?)

Stop Docker for Windows, and with the file downloaded and unzipped, open an admin PowerShell, navigate to the directory the dockerd.exe file is in, and run
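.\dockerd.exe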

You will get an output like this and it will keep going for a while.

Leave this window open whilst you are using Docker like this. Once you see

Then open a new PowerShell window or VS Code. You will need to run it as admin. I ran

to see if it was up and available.

I also had to create a bootx64.efi file at C:\Program Files\Linux Containers which I did by copying and renaming the kernel file in that folder.

Now I can use a docker-compose file to create 5 containers. Four will be Windows containers from Andrew’s Docker Hub repositories or Microsoft’s Docker Hub for SQL 2012, SQL 2014, SQL 2016, and SQL 2017 and one will be the latest Ubuntu SQL 2019 CTP 2.2 image. Note that you have to use version 2.4 of docker compose as the platform tag is not available yet in any later version, although it is coming to 3.7 soon.

Save this code as docker-compose.yml and navigate to the directory in an admin PowerShell or VS Code and run

and now I have Windows and Linux SQL containers running together. This means that I can test some code against all versions of SQL from 2012 to 2019 easily in containers 🙂

So that is just a bit of fun.

To return to the normal Docker, simply CTRL and C the admin PowerShell you ran .\dockerd.exe in and you will see the logs showing it shutting down.

You will then be able to start Docker For Windows as usual.

I look forward to the time, hopefully early next year when all of the relevant PR’s have been merged and this is available in Docker for Windows.

Happy Automating 🙂

Getting SQL Services, Starting, Stopping and Restarting them with dbatools

There was a question in the #dbatools slack channel 


Getting dbatools

dbatools enables you to administer SQL Server with PowerShell. To get it, simply open PowerShell and run
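Install-Module dbatools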

You can find more details on the web-site

Finding the Command

To find a command you can use the dbatools command Find-DbaCommand.
For commands for services run
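Find-DbaCommand -Pattern service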

There are a whole bundle returned


This is how you can find any dbatools command. There is also a -Tag parameter on Find-DbaCommand.
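Find-DbaCommand -Tag Service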

This returns the commands tagged with Service.


How to use any PowerShell command

Always always start with Get-Help
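Get-Help Get-DbaService -Detailed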


This will show you all the information about the command including examples 🙂


All of these commands below require that the account running the PowerShell is a Local Admin on the host.

One Host Many Hosts

Now I have used just one host for all of the examples on this page. Do not be fooled; you can always use an array of hosts. Wherever I have $ComputerName you can set it to as many hosts as you like.

You can even get those names from a database, an Excel sheet or a CMS.
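# hypothetical host names
$ComputerName = 'SQL0', 'SQL1', 'SQL2'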

Getting the Services

So to get the services on a machine run
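Get-DbaService -ComputerName $ComputerName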


You can output into a table format.


I will use the alias ft for this in some of the examples; that is fine for the command line, but use the full command name in any code that you write that other people will use.
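Get-DbaService -ComputerName $ComputerName | Format-Table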

You have an object returned so you can output to anything if you want – CSV, JSON, text file, email, Azure storage, database; the world is your oyster.

Getting the Services for one instance

The Get-DbaService command has a number of parameters. There is an InstanceName parameter enabling you to get only the services for one instance. If we just want the default instance services
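# MSSQLSERVER is the instance name of a default instance
Get-DbaService -ComputerName $ComputerName -InstanceName MSSQLSERVER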


Just the MIRROR instance services
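Get-DbaService -ComputerName $ComputerName -InstanceName MIRROR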


Getting just the Engine or Agent services

You can also use the -Type parameter to get only services of a particular type. You can get one of the following: “Agent”,”Browser”,”Engine”,”FullText”,”SSAS”,”SSIS”,”SSRS”, “PolyBase”

So to get only the Agent Services
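Get-DbaService -ComputerName $ComputerName -Type Agent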

You can combine the InstanceName and the Type parameters to get, say, only the default instance engine service
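Get-DbaService -ComputerName $ComputerName -InstanceName MSSQLSERVER -Type Engine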

Starting and stopping and restarting services

You can use Start-DbaService and Stop-DbaService to start and stop the services. They each have ComputerName, InstanceName and Type parameters like Get-DbaService.

So if after running
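Get-DbaService -ComputerName $ComputerName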

you find that all services are stopped


Start All the Services

You can run
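Start-DbaService -ComputerName $ComputerName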

and start them all


The full text service was started with the engine service which is why it gave a warning. You can see this if you have all of the services stopped and just want to start the engine services with the type parameter.


If you just want to start the Agent services, you can use
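Start-DbaService -ComputerName $ComputerName -Type Agent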


You can start just the services for one instance
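Start-DbaService -ComputerName $ComputerName -InstanceName MIRROR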


Stopping the services

Stopping the services works in the same way. Let’s stop the MIRROR instance services we have just started. This will stop the services for an instance
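Stop-DbaService -ComputerName $ComputerName -InstanceName MIRROR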


We can stop them by type as well, although this will show an extra requirement. If we start our MIRROR instance services again and then try to stop just the engine type.


You will get a warning due to the dependent services

WARNING: [10:31:02][Update-ServiceStatus] (MSSQL$MIRROR on SQL0) The attempt to stop the service returned the following error: The service cannot be stopped because other services that are running are dependent on it.
WARNING: [10:31:02][Update-ServiceStatus] (MSSQL$MIRROR on SQL0) Run the command with ‘-Force’ switch to force the restart of a dependent SQL Agent

So all you have to do is use the force Luke (or whatever your name is!)
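Stop-DbaService -ComputerName $ComputerName -InstanceName MIRROR -Type Engine -Force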


You can also stop the services for an entire host; again you will need the Force parameter.
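Stop-DbaService -ComputerName $ComputerName -Force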


Restarting Services

It will come as no surprise by now to learn that Restart-DbaService follows the same pattern. It also has ComputerName, InstanceName and Type parameters like Get-DbaService, Start-DbaService and Stop-DbaService (consistency is great; it’s one of the things that is being worked on towards 1.0 as you can see in the Bill of Health)

Again you will need the -Force for dependent services; you can restart all of the services on a host with
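Restart-DbaService -ComputerName $ComputerName -Force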


or just the services for an instance
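Restart-DbaService -ComputerName $ComputerName -InstanceName MIRROR -Force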


or just the Agent Services
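Restart-DbaService -ComputerName $ComputerName -Type Agent -Force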


Doing a bit of coding

Now none of that answers @g-kannan’s question. Restarting only services with a certain service account.

With PowerShell you can pipe commands together so that the results of the first command are piped into the second. So we can get all of the engine services on a host for an instance with Get-DbaService and start them with Start-DbaService like this
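Get-DbaService -ComputerName $ComputerName -InstanceName MIRROR -Type Engine | Start-DbaService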


or get all of the engine services for an instance on a host and stop them
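Get-DbaService -ComputerName $ComputerName -InstanceName MIRROR -Type Engine | Stop-DbaService -Force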


or maybe you want to get all of the services that have stopped
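Get-DbaService -ComputerName $ComputerName | Where-Object State -eq 'Stopped'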


You can do the same thing with syntax that may make more sense to you if you are used to T-SQL as follows
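# a sketch - the filter written out in full, closer to a WHERE clause
Get-DbaService -ComputerName $ComputerName |
    Where-Object { $PSItem.State -eq 'Stopped' } |
    Select-Object ComputerName, ServiceName, State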


and then start only those services you could do
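Get-DbaService -ComputerName $ComputerName | Where-Object State -eq 'Stopped' | Start-DbaService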


(note – you would just use Start-DbaService in this case as it won’t start services that are already started!)


Come On Rob! Answer the question!

So now that you know a lot more about these commands, you can restart only the services using a particular service account by using Get-DbaService to get the services
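# filter on the StartName property (the service account here is hypothetical)
Get-DbaService -ComputerName $ComputerName | Where-Object StartName -eq 'THEBEARD\SqlSvc'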


and then once you know that you have the right ‘query’ you can pipe that to Restart-DbaService (Like making sure your SELECT query returns the correct rows for your WHERE clause before running the DELETE or UPDATE)
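Get-DbaService -ComputerName $ComputerName | Where-Object StartName -eq 'THEBEARD\SqlSvc' | Restart-DbaService -Force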


Happy Automating !

SQL Server Availability Group FailoverDetection Utility PowerShell Function Improvements – Named Instances, Archiving Data, Speed

In my last post I wrote about a new function for gathering the data and running the FailoverDetection utility by the Tiger Team to analyse availability group failovers. I have updated it following some comments and using it for a day.

Don’t forget the named instances Rob!

Michael Karpenko wrote a comment pointing out that I had not supported named instances, which was correct as it had not been written for that. Thank you Michael 🙂 I have updated the code to deal with named instances.

Confusing results

I also realised as we started testing the code that if you had run the code once and then ran it again against a different availability group, the tool does not clear out the data folder that it uses, so you can get confusing results.

For example, I had looked at the default instance and then a MIRROR named instance. The results json for the default instance run showed only the SQLClusterAG results, while the run for the named instance showed both the SQLClusterAG and the MirrrAG instance results.


This is not so useful if you don’t notice this at first with the expanded json!! Now you may in this situation want to see the combined results from all of the availability groups on one cluster. You could gather all of the data from each instance and then add it to the data folder easily enough.

By cleaning out the data folder before running the utility the results are as expected.


Archive the data for historical analysis

One of the production DBAs pointed out that having gathered the information, it would be useful to hold it for better analysis of repeated issues. I have added an archiving step so that when the tools runs, if there is already data in the data gathering folder, it will copy that to an archive folder and name it with the date and time that the cluster log was created as this is a good estimation of when the analysis was performed. If an archive folder location is not provided it will create an archive folder in the data folder. This is not an ideal solution though, as the utility will copy all of the files and folders from there to its own location so it is better to define an archive folder in the parameters.

Get-Eventlog is sloooooooooooow

I was running the tools and noticed it sat running the system event log task for a long long time. I ran some tests using a variation of the dbatools prompt.

This will show in the prompt how long it took to run the previous statement.


In my tests it took 18ms to set the date variable, FOUR MINUTES and FORTY THREE seconds to get the system log entries from the last 2 days using Get-EventLog, and 29.1 seconds using Get-WinEvent and a FilterHashtable.
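The two approaches, for comparison (a sketch of what was timed):

$date = (Get-Date).AddDays(-2)

# the slow way
Get-EventLog -LogName System -After $date

# the much faster way
Get-WinEvent -FilterHashtable @{ LogName = 'System'; StartTime = $date }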

Getting the function

This function requires PowerShell version 5 and the dbatools module.

You can get the function from my GitHub Functions Repository here (at the moment – will be adding to dbatools see below)

Load the function by either running the code or if you have it saved as a file dot-sourcing it.

. .\Invoke-SqlFailOverDetection.ps1

There are two .’s with a space in between and then a \ without a space, so Dot Space Dot Whack path to file.

The next thing you should do is what you should always do with a new PowerShell function, look at the help.

Get-Help Invoke-SqlFailOverDetection -Detailed

You will find plenty of examples to get you going and explanations of all of the parameters and more info on my previous post.

Happy Automating!