Using the same Azure DevOps build steps for Terraform in different pipelines with Task Groups to build an Azure Linux SQL VM

In my last post I showed how to build an Azure DevOps Pipeline for a Terraform build of an Azure SQLDB. The pipeline takes the Terraform code and builds the required infrastructure.

The plan all along has been to enable me to build different environments depending on the requirement. Obviously I could repeat the steps from the last post for a new repository containing the Terraform code for a different environment, but

If you are going to do something more than once, Automate It

Who first said this? Anyone know?

The build steps for building the Terraform are the same each time (as long as I keep a standard folder and naming structure), so it would be much more beneficial if I could keep them in a single place, with any alterations to the process only needing to be made in that one place 🙂

Task Groups

Azure DevOps has task groups. On the Microsoft Docs page they are described as


A task group allows you to encapsulate a sequence of tasks, already defined in a build or a release pipeline, into a single reusable task that can be added to a build or release pipeline, just like any other task


https://docs.microsoft.com/en-us/azure/devops/pipelines/library/task-groups?view=azure-devops

If you are doing this with a more complicated existing build pipeline, it is important that you read the Before You Create A Task Group section on the docs page. This will save you time when trying to understand why variables are not available (another grey hair in my beard!)

Creating A Task Group

Here’s the thing: creating a task group is so easy it should be the default way you create Azure DevOps Pipelines. Let me walk you through it.

I will use the Build Pipeline from the previous post. Click Edit from the build page

Then CTRL + click to select all of the steps

Right-click and there’s a Create task group button to click!

You can see that it has helpfully added the values for the parameters it requires for the location, Storage Account and the Resource Group.

Remember the grey beard hair above? We need to change those values to use the variables that we will add to the Build Pipeline using

$(VariableName)
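So, for example, the three parameter values end up looking something like this (the variable names here are illustrative – use whichever names you give the variables in your Build Pipeline):

location = $(location)
terraformstorageaccount = $(terraformstorageaccount)
terraformstoragerg = $(terraformstoragerg)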

Once you have done that click Create

This will also alter the current Build Pipeline to use the Task Group. Now we have a Task Group that we can use in any build pipeline in this project.

Using the Task Group with a new Build Pipeline to build an Azure Linux SQL VM

Let’s re-use the build steps to create an Azure SQL Linux VM. First I created a new GitHub repository for my Terraform code. Using the docs, I created the Terraform to build a resource group, a Linux SQL VM, a virtual network, a subnet, a NIC for the VM, a public IP for the VM, and a network security group with two rules, one for SQL and one for SSH. It will look like this
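To give a rough idea of what is in the repository, the resource group and network security group parts of such a configuration look something like this (a trimmed, illustrative sketch rather than the exact code – the VM, virtual network, subnet, NIC and public IP resources are omitted, and the names are made up):

resource "azurerm_resource_group" "sqlvm" {
  name     = "SQLLinuxVM-rg"
  location = "UK South"
}

resource "azurerm_network_security_group" "sqlvm" {
  name                = "SQLLinuxVM-nsg"
  location            = azurerm_resource_group.sqlvm.location
  resource_group_name = azurerm_resource_group.sqlvm.name

  # Allow SQL Server traffic to the VM
  security_rule {
    name                       = "SQL"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "1433"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  # Allow SSH for administering the VM
  security_rule {
    name                       = "SSH"
    priority                   = 1002
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}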

The next step is to choose the repository

Again we are going to select Empty job (although the next post will be about Configuration as Code 🙂)

As before we will name the Build Pipeline and the Agent Job Step and click the + to add a new task. This time we will search for the Task Group name that we created

I need to add the variables from the variable.tf in the code and also the ones for the Task Group
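For reference, the entries in the variable.tf look something like this (names and defaults here are illustrative – yours need to match what the Terraform code expects):

variable "location" {
  description = "The Azure region to build in"
  default     = "UK South"
}

variable "vmname" {
  description = "The name of the Linux SQL VM"
}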

and when I click save and queue

It runs for less than 7 minutes

and when I look in the Azure portal

and I can connect in Azure Data Studio

Altering The Task Group

You can find the Task Groups under Pipelines in your Azure DevOps project

Click on the Task Group that you have created and then you can alter or edit it if required and click Save

This will warn you that any changes will affect all pipelines and task groups that are using this task group. To find out what will be affected, click on References.

Now I can run the same build steps for any Build Pipeline and alter them all in a single place using Task Groups, simplifying the administration of the Build Pipelines.

The next post will show how to use Azure DevOps templates to use the same build steps across many projects and build pipelines, and will build a simple AKS cluster.

The first post showed how to build an Azure SQLDB with Terraform using VS Code

The second post showed how to use Azure DevOps Task Groups to use the same build steps in multiple pipelines and build an Azure Linux SQL Server VM

Happy Automating!

Adding a Folder of Scripts to GitHub with Azure Data Studio

In my last post I showed how to add a folder of scripts to GitHub using Visual Studio Code.

You can do it with Azure Data Studio as well. It’s exactly the same steps!

The blog post could end here but read on for some screenshots 😉

Follow the previous post for details of setting up a new GitHub account

Create a repository in GitHub


Open the folder in Azure Data Studio with CTRL + K, CTRL + O (or File –> Open Folder)

Click on the Source Control icon or CTRL + SHIFT + G and then Initialize Repository

Choose the folder

Write a commit message

Say yes to the prompt. Press CTRL + ‘ to open the terminal

Navigate to the scripts folder. (I have a PSDrive set up to my Git folder)

Set-Location GIT:\ADS-Scripts\
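(If you want the same shortcut, a PSDrive like that can be created with something like this – the root path here is just an example, point it at your own Git folder)

New-PSDrive -Name GIT -PSProvider FileSystem -Root C:\Git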

and copy the code from the GitHub page after “…or push an existing repository from the command line”
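At the time of writing, that code looks something like this (with your own GitHub user name, and assuming the repository is called ADS-Scripts like the folder above):

git remote add origin https://github.com/YourGitHubUserName/ADS-Scripts.git
git push -u origin master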

and run it

and there are your scripts in GitHub

Make some changes to a script and it will go muddy brown

and then write a commit message. If you click on the file name in the source control tab then you can see the changes that have been made but are not currently tracked

Commit the change with CTRL + ENTER and then click the roundy-roundy icon (seriously, anyone know its name?), click yes on the prompt and your changes are in GitHub as well 🙂

Realistically, you can use the previous post to do this with Azure Data Studio as it is built on top of Visual Studio Code but I thought it was worth showing the steps in Azure Data Studio.

Happy Source Controlling

Persisting databases with named volumes on Windows with docker compose

With all things containers I refer to my good friend Andrew Pruski. Known as dbafromthecold on Twitter, he blogs at https://dbafromthecold.com

I was reading his latest blog post Using docker named volumes to persist databases in SQL Server and decided to give it a try.

His instructions worked perfectly and I thought I would try them using a docker-compose file as I like the ease of spinning up containers with them.

I created a docker-compose file like this, which maps my backup folder on my Windows 10 laptop to a directory on the container and two more folders to the system folders on the container, in the same way as Andrew does in his blog.

version: '3.7'

services:
    2019-CTP23:
        image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu
        ports:  
          - "15591:1433"
          - "5022:5022"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
        volumes: 
          - C:\MSSQL\BACKUP\KEEP:/var/opt/mssql/backups
          - C:\MSSQL\DockerFiles\datafiles:/var/opt/sqlserver
          - C:\MSSQL\DockerFiles\system:/var/opt/mssql

and then from the directory I ran

docker-compose up -d

This will build the containers as defined in the docker-compose file. The -d runs the container in the background. This was the result.

UPDATE – 2019-03-27

I have no idea why, but today it has worked as expected using the above docker-compose file. I had tried this a couple of times, restarted Docker and restarted my laptop, and was consistently getting the results below – however, today it has worked

So feel free to carry on reading, it’s a fun story and it shows how you can persist the databases in a new container but the above docker-compose has worked!

The command completed successfully but, as you can see on the left, the container is red because it is not running. (I am using the Docker Explorer extension for Visual Studio Code)

I inspected the logs from the container using

docker logs ctp23_2019-CTP23_1

which returned

This is an evaluation version. There are [153] days left in the evaluation period.
This program has encountered a fatal error and cannot continue running at Tue Mar 26 19:40:35 2019
The following diagnostic information is available:
Reason: 0x00000006 Status: 0x40000015 Message: Kernel bug check Address: 0x6b643120
Parameters: 0x10861f680
Stacktrace: 000000006b72d63f 000000006b64317b 000000006b6305ca
000000006b63ee02 000000006b72b83a 000000006b72a29d
000000006b769c02 000000006b881000 000000006b894000
000000006b89c000 0000000000000001
Process: 7 – sqlservr
Thread: 11 (application thread 0x4)
Instance Id: e01b154f-7986-42c6-ae13-c7d34b8b257d
Crash Id: 8cbb1c22-a8d6-4fad-bf8f-01c6aa5389b7
Build stamp: 0e53295d0e1704ae5b221538dd6e2322cd46134e0cc32be49c887ca84cdb8c10
Distribution: Ubuntu 16.04.6 LTS
Processors: 2
Total Memory: 4906205184 bytes
Timestamp: Tue Mar 26 19:40:35 2019
Ubuntu 16.04.6 LTS
Capturing core dump and information to /var/opt/mssql/log…
dmesg: read kernel buffer failed: Operation not permitted
No journal files were found.
No journal files were found.
Attempting to capture a dump with paldumper
WARNING: Capture attempt failure detected
Attempting to capture a filtered dump with paldumper
WARNING: Attempt to capture dump failed. Reference /var/opt/mssql/log/core.sqlservr.7.temp/log/paldumper-debug.log for details
Attempting to capture a dump with gdb
WARNING: Unable to capture crash dump with GDB. You may need to
allow ptrace debugging, enable the CAP_SYS_PTRACE capability, or
run as root.

which told me that …………. it hadn’t worked. So I removed the containers with

docker-compose down

I thought I would create the volumes ahead of time like Andrew’s blog had mentioned with

docker volume create mssqlsystem
docker volume create mssqluser

and then use the volume names in the docker-compose file mapped to the system folders in the container. This time the result was

ERROR: Named volume “mssqlsystem:/var/opt/sqlserver:rw” is used in service “2019-CTP23” but no declaration was found in the volumes section.

So that didn’t work either 🙂
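With hindsight, that error is saying that named volumes also need to be declared in a top-level volumes section of the docker-compose file before services can reference them. Something like this should satisfy it – external: true tells Compose to use the volumes created ahead of time instead of creating its own (I have not re-tested this exact file, so treat it as a sketch):

version: '3.7'

services:
    2019-CTP23:
        image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu
        ports:
          - "15591:1433"
          - "5022:5022"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
        volumes:
          - C:\MSSQL\BACKUP\KEEP:/var/opt/mssql/backups
          - mssqluser:/var/opt/sqlserver
          - mssqlsystem:/var/opt/mssql

volumes:
    mssqluser:
        external: true
    mssqlsystem:
        external: true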

I decided to inspect the volume definition using

docker volume inspect mssqlsystem
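which returns JSON along these lines (the values other than the mountpoint will differ on your machine):

[
    {
        "CreatedAt": "2019-03-26T19:00:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/mssqlsystem/_data",
        "Name": "mssqlsystem",
        "Options": {},
        "Scope": "local"
    }
]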

I can see the mountpoint is /var/lib/docker/volumes/mssqlsystem/_data, so I decided to try a docker-compose file like this

version: '3.7'

services:
    2019-CTP23:
        image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu
        ports:  
          - "15591:1433"
          - "5022:5022"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
        volumes: 
          - C:\MSSQL\BACKUP\KEEP:/var/opt/mssql/backups
          - /var/lib/docker/volumes/mssqluser/_data:/var/opt/sqlserver
          - /var/lib/docker/volumes/mssqlsystem/_data:/var/opt/mssql

and then ran docker-compose up without the -d flag so that I could see all of the output

You can see in the output that the system database files are being moved. That looks like it is working, so I used CTRL + C to stop the container and return the terminal. I then ran docker-compose up -d and

I created a special database for Andrew.

I could then remove the container with

docker-compose down
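(docker-compose down removes the containers and networks defined in the file but, helpfully for this experiment, it leaves volumes alone – you would have to ask for those to be removed explicitly)

docker-compose down --volumes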

To make sure there is nothing up my sleeve I altered the docker-compose file to use a different name and port but kept the volume definitions the same.

version: '3.7'

services:
    2019-CTP23-Mk1:
        image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu
        ports:  
          - "15592:1433"
          - "5022:5022"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
        volumes: 
          - C:\MSSQL\BACKUP\KEEP:/var/opt/mssql/backups
          - /var/lib/docker/volumes/mssqluser/_data:/var/opt/sqlserver
          - /var/lib/docker/volumes/mssqlsystem/_data:/var/opt/mssql

I ran docker-compose up -d again and connected to the new container and lo and behold the database is still there

So after doing this, I have learned that to persist the databases and to use docker-compose files I had to map the volume to the mountpoint of the docker volume. Except I haven’t – I have learned that sometimes weird things happen with Docker on my laptop!!

What’s a SQL Notebook in Azure Data Studio?

Azure Data Studio is a cross-platform database tool for data professionals using the Microsoft family of on-premises and cloud data platforms on Windows, macOS, and Linux.

Recently Vicky Harp tweeted

By the way, you can watch a recording from SQLBits of Vicky’s session


So in the interest of learning about something new I decided to give it a try.

Install The Insiders Edition

Unlike Visual Studio Code, which has a link to the insiders download on the front page, you will have to visit the GitHub repository for the links to download the insiders release of Azure Data Studio. Scroll down and you will see

Try out the latest insiders build from master:

See the change log for additional details of what’s in this release.

Once you have installed it, you can connect to an instance, right-click and choose New Notebook, or you can use File – New Notebook

Incidentally, I use the docker-compose file here to create the containers, and I map C:\MSSQL\BACKUP\KEEP on my local machine (where my backups are) to /var/opt/mssql/backups on the containers on lines 10 and 17 of the docker-compose file, so change as required. If you want to follow along then put the ValidationResults.bak in the folder on your local machine.
The Create-Ag.ps1 shows the code and creates an AG with dbatools. But I digress!

Install Notebook Dependencies

Once you click New Notebook you will get a prompt to install the dependencies.

It will show its output

and take a few minutes to run

It took all of 11 minutes on my machine


Create a Notebook

OK, so now that we have the dependencies installed we can create a notebook. I decided to use the ValidationResults database that I use for my dbachecks demos and describe here. I need to restore it from my local folder that I have mapped as a volume to my container. Of course, I use dbatools for this 🙂

# U: sqladmin P: dbatools.IO
$cred = Get-Credential
# $sqlinstance1 holds the instance name - for a container this will be
# something like 'localhost,15591' (adjust to the port you have mapped)
$sqlinstance1 = 'localhost,15591'
$restoreDbaDatabaseSplat = @{
    SqlInstance = $sqlinstance1
    SqlCredential = $cred
    UseDestinationDefaultDirectories = $true
    Path = '/var/opt/mssql/backups/ValidationResults.bak'
}
Restore-DbaDatabase @restoreDbaDatabaseSplat

I had already got a connection saved to the instance in Azure Data Studio; you may need to create a new one using the new connection icon at the top left and filling in the details. The password is in the code above.



Now I can start with my notebook. I am faced with this



I click on text and provide an intro


Once I had written that and clicked out, I couldn’t see what to do straight away!

Then I saw the code and text buttons at the top 🙂 Right, let’s get on with it 🙂 I hit the code button and paste in the T-SQL to reset the dates in the database to simulate dbachecks having been run this morning.


There’s a run cell button on the right and when I press it

Cool 🙂

If the SQL query has results then they are shown as well

This is fun and I can see plenty of uses for it. Go and have a play with SQL notebooks 🙂

Source Control

I used CTRL + K, CTRL + O to open a folder and saved my notebook in my local Presentations folder, which is source controlled. When I opened the explorer with CTRL + SHIFT + E I could see that the folder and the file are colour coded green and have a U next to them, marking them as Untracked. I could also see that the source control icon has a 1 for the number of files with changes and, in the bottom left, that I am in the master branch.

If I click on the source control icon (or CTRL + SHIFT + G) I can see the files with the changes and can enter a commit message

I then press CTRL + ENTER to commit my change and get this pop-up


As I only have one file and it has all the changes for this commit, I click yes. If I had changed more than one file and only wanted to commit a single one at a time, I would hover my mouse over the file and click the + to stage my change.



If I make a further change to the notebook and save it, I can see that the source control provider recognises the change but this time the folder the file is in and the file are colour coded brown with an M to show that they have been modified.

Unlike Visual Studio Code, when you then click on the source control icon and click on the change it does not show the differences in the notebook although this works with SQL files.

When I have made all my changes and committed them with good commit messages


I can see that there are 3 local changes ready to be pushed to my remote repository (GitHub in this case) and 0 remote commits in this branch by looking at the bottom left

I can click on the “roundy roundy” icon (I don’t know its proper name 😊) and synchronise my changes. This comes with a pop-up

Personally, I never press OK, Don’t Show Again, because I like the double check and the chance to think “Is this really what I want to do right now?”. Once I press OK my changes will be synchronised with the remote repository. Explaining this means that you can find the notebook I have used in my Presentations GitHub repository, which means that you can run the notebook too, using the docker-compose file here and the instructions further up in the post.