Now I can use this yaml as configuration as code for my Build Pipeline 🙂 It can be used from any Azure DevOps project. Once you start looking at the code and the documentation for the yaml schema you can begin to write your pipelines as YAML, but sometimes it is easier to just create a build pipeline, or even just a job step, in the browser and click the View YAML button!
Create an AKS Cluster with a SQL 2019 container using Terraform and Build templates
This time I am going to choose the Configuration as code template
I am going to give it a name and it will show me that it needs the path to the yaml file containing the build definition in the current repository.
Clicking the ellipsis (the 3 dots) pops up a file chooser and I pick the build.yaml file
The build.yaml file looks like this. The name is the USER/Repository Name, the endpoint is the name of the GitHub service connection in Azure DevOps, and the template value is the name of the build yaml file @ the name given for the repository value.
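Something along these lines (every name here is a placeholder to show the shape, not my actual values):

resources:
  repositories:
    - repository: BuildTemplates                      # the name given for the repository value
      type: github
      name: YourGitHubUser/AzureDevOpsBuildTemplates  # USER/Repository Name
      endpoint: YourGitHubServiceConnection           # the GitHub service connection name

jobs:
  - template: terraform-build.yml@BuildTemplates      # build yaml file @ repository value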
You can find (and change) your GitHub service connection name by clicking on the cog at the bottom left in Azure DevOps and then clicking Service connections
I still need to create the variables for my Terraform template (although perhaps I can now just leave those in my code?). For the AKS cluster build I currently have to add presentation, location, ResourceGroupName, AgentPoolName, ServiceName, VMSize and agent_count.
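If I did want to leave them in the code instead, a variables block in the build.yaml would be a sketch of that (the values are placeholders, not my real ones):

variables:
  presentation: 'MyPresentation'
  location: 'ukwest'
  ResourceGroupName: 'MyResourceGroup'
  AgentPoolName: 'MyAgentPool'
  ServiceName: 'MyAKSCluster'
  VMSize: 'Standard_D2_v2'
  agent_count: 2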
Then I click save and queue and the job starts running
If I want to edit the pipeline it looks a little different
The variables and triggers can be found under the ellipsis (the 3 dots) at the top right
It also defaults the trigger to continuous integration, so a push to the repository will automatically queue a new build.
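If I wanted that under my control in the code as well, a trigger section in the build.yaml would do it (branch name illustrative); without one, a push to any branch queues a build:

trigger:
  branches:
    include:
      - master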
It takes a bit longer to build
and when I get the Terraform code wrong and the build fails, I can just alter the code, commit it and push, and a new build will start and Terraform will work out what is already built and what still needs to be built!
but eventually the job finishes successfully
and the resources are built
and in Visual Studio Code with the Kubernetes extension installed I can connect to the cluster by clicking the ellipsis (the 3 dots) and choosing Add Existing Cluster
I choose Azure Kubernetes Services and click next
Choose my subscription and then add the cluster
and then I can explore my cluster
I can also see the dashboard by right clicking on the cluster name and choosing Open Dashboard
Right clicking on the service name and choosing Describe shows the external IP address, which I can put into Azure Data Studio to connect to my container
So now I can source control my Build Job Steps and hold them in a central repository, and I can make use of them in any project. This gives me much more control and saves me from repeating myself repeating myself. The disadvantage is that there is no handy warning, when I change the underlying build repository, that I will be affecting other Build Pipelines, and there is no easy way to see which Build Pipelines depend on the build yaml file
In my last post I showed how to build an Azure DevOps Pipeline for a Terraform build of an Azure SQL DB. This will take the Terraform code and build the required infrastructure.
The plan all along has been to enable me to build different environments depending on the requirement. Obviously I could repeat the steps from the last post for a new repository containing the Terraform code for a different environment, but
If you are going to do something more than once Automate It
who first said this? Anyone know?
The build steps for building the Terraform are the same each time (as long as I keep a standard folder and naming structure), so it would be much more beneficial to keep them in a single place where any alterations to the process only need to be made once 🙂
A task group allows you to encapsulate a sequence of tasks, already defined in a build or a release pipeline, into a single reusable task that can be added to a build or release pipeline, just like any other task.
Here’s the thing: creating a task group is so easy it should be the default way you create Azure DevOps Pipelines. Let me walk you through it.
I will use the Build Pipeline from the previous post. Click edit from the build page
Then CTRL and click to select all of the steps
Right click and there’s a Create task group button to click!
You can see that it has helpfully added the values for the parameters it requires for the location, Storage Account and the Resource Group.
Remember the grey beard hair above? We need to change those values to use the variables that we will add to the Build Pipeline, using the $(VariableName) syntax.
Once you have done that click Create
This will also alter the current Build Pipeline to use the Task Group. Now we have a Task Group that we can use in any build pipeline in this project.
Using the Task Group with a new Build Pipeline to build an Azure Linux SQL VM
Let’s re-use the build steps to create an Azure Linux SQL VM. First I created a new GitHub repository for my Terraform code. Using the docs, I created the Terraform to create a resource group, a Linux SQL VM, a virtual network, a subnet, a NIC for the VM, a public IP for the VM and a network security group with two rules, one for SQL and one for SSH. It will look like this
The next step is to choose the repository
again we are going to select Empty job (although the next post will be about Configuration as Code 🙂 )
As before we will name the Build Pipeline and the Agent Job Step and click the + to add a new task. This time we will search for the Task Group name that we created
I need to add the variables from the variable.tf file in the code and also the variables for the Task Group
and when I click save and queue
It runs for less than 7 minutes
and when I look in the Azure portal
and I can connect in Azure Data Studio
Altering The Task Group
You can find the Task Groups under Pipelines in your Azure DevOps project
Click on the Task Group that you have created; you can then alter and edit it if required and click Save
This will warn you that any changes will affect all pipelines and task groups that are using this task group. To find out what will be affected, click on References and it will show you.
Now I can run the same build steps in any Build Pipeline and alter them all in a single place using Task Groups, simplifying the administration of the Build Pipelines.
Andrew’s instructions worked perfectly and I thought I would try them using a docker-compose file, as I like the ease of spinning up containers that way.
I created a docker-compose file which maps my backup folder on my Windows 10 laptop to a directory on the container, and two more folders to the system folders on the container, in the same way as Andrew has in his blog.
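It would look something like this (the image tag, port, password and host paths are illustrative placeholders rather than my exact values):

version: '3.7'
services:
  2019-CTP23:
    image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu
    ports:
      - "15789:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=YourStrongPassw0rd!
    volumes:
      - "C:/MSSQL/BACKUP/KEEP:/var/opt/mssql/backups"   # my backup folder
      - "C:/MSSQL/system:/var/opt/mssql"                # system folders on the container
      - "C:/MSSQL/user:/var/opt/sqlserver"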
Running docker-compose up -d will build the containers as defined in the docker-compose file; the -d flag runs the containers in the background. This was the result.
UPDATE – 2019-03-27
I have no idea why, but today it has worked as expected using the above docker-compose file. I had tried this a couple of times, restarted Docker and restarted my laptop, and was consistently getting the results below – however, today it has worked
So feel free to carry on reading, it’s a fun story and it shows how you can persist the databases in a new container but the above docker-compose has worked!
This is an evaluation version. There are  days left in the evaluation period.
This program has encountered a fatal error and cannot continue running at Tue Mar 26 19:40:35 2019
The following diagnostic information is available:
Reason: 0x00000006
Status: 0x40000015
Message: Kernel bug check
Address: 0x6b643120
Parameters: 0x10861f680
Stacktrace: 000000006b72d63f 000000006b64317b 000000006b6305ca
            000000006b63ee02 000000006b72b83a 000000006b72a29d
            000000006b769c02 000000006b881000 000000006b894000
            000000006b89c000 0000000000000001
Process: 7 – sqlservr
Thread: 11 (application thread 0x4)
Instance Id: e01b154f-7986-42c6-ae13-c7d34b8b257d
Crash Id: 8cbb1c22-a8d6-4fad-bf8f-01c6aa5389b7
Build stamp: 0e53295d0e1704ae5b221538dd6e2322cd46134e0cc32be49c887ca84cdb8c10
Distribution: Ubuntu 16.04.6 LTS
Processors: 2
Total Memory: 4906205184 bytes
Timestamp: Tue Mar 26 19:40:35 2019
Ubuntu 16.04.6 LTS
Capturing core dump and information to /var/opt/mssql/log…
dmesg: read kernel buffer failed: Operation not permitted
No journal files were found.
No journal files were found.
Attempting to capture a dump with paldumper
WARNING: Capture attempt failure detected
Attempting to capture a filtered dump with paldumper
WARNING: Attempt to capture dump failed. Reference /var/opt/mssql/log/core.sqlservr.7.temp/log/paldumper-debug.log for details
Attempting to capture a dump with gdb
WARNING: Unable to capture crash dump with GDB. You may need to allow ptrace debugging, enable the CAP_SYS_PTRACE capability, or run as root.
which told me that …………. it hadn’t worked. So I removed the containers with
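docker-compose down

(assuming the usual compose command here – it stops and removes the containers that the compose file created)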
I thought I would create the volumes ahead of time like Andrew’s blog had mentioned with
docker volume create mssqlsystem
docker volume create mssqluser
and then use the volume names in the docker-compose file, mapped to the system folders in the container. This time the result was
ERROR: Named volume “mssqlsystem:/var/opt/sqlserver:rw” is used in service “2019-CTP23” but no declaration was found in the volumes section.
So that didn’t work either 🙂
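Reading the error again, compose wants named volumes to be declared in a top-level volumes section as well; a minimal sketch of that (service details as before, with external telling compose to use the volumes created above) would be:

version: '3.7'
services:
  2019-CTP23:
    # image, ports and environment as before
    volumes:
      - mssqlsystem:/var/opt/sqlserver   # the mapping from the error message
      - mssqluser:/var/opt/mssql         # illustrative target for the second volume
volumes:
  mssqlsystem:
    external: true
  mssqluser:
    external: true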
I decided to inspect the volume definition using
docker volume inspect mssqlsystem
I can see the mountpoint is /var/lib/docker/volumes/mssqlsystem/_data, so I decided to try a docker-compose like this
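(a sketch again – the image, port and password are illustrative; bind-mounting the named volumes’ mountpoints directly is the point):

version: '3.7'
services:
  2019-CTP23:
    image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu
    ports:
      - "15789:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=YourStrongPassw0rd!
    volumes:
      - /var/lib/docker/volumes/mssqlsystem/_data:/var/opt/mssql
      - /var/lib/docker/volumes/mssqluser/_data:/var/opt/sqlserver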
and then ran docker-compose up without the -d flag so that I could see all of the output
You can see in the output that the system database files are being moved. That looks like it is working, so I used CTRL + C to stop the container and return to the terminal. I then ran docker-compose up -d and
I created a special database for Andrew.
I could then remove the container with
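docker-compose down

(again, assuming the usual command)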
To make sure there is nothing up my sleeve I altered the docker-compose file to use a different name and port but kept the volume definitions the same.
I ran docker-compose up -d again, connected to the new container and lo and behold, the database is still there
So after doing this, I have learned that to persist the databases when using docker-compose files I had to map the volume to the mountpoint of the docker volume. Except I haven’t – I have learned that sometimes weird things happen with Docker on my laptop!!
See the change log for additional details of what’s in this release.
Once you have installed it, you can connect to an instance, right click and choose New Notebook, or you can use File – New Notebook
Incidentally, I use the docker-compose file here to create the containers and I map C:\MSSQL\BACKUP\KEEP on my local machine (where my backups are) to /var/opt/mssql/backups on the containers on lines 10 and 17 of the docker-compose, so change as required. If you want to follow along, then put the ValidationResults.bak in the folder on your local machine. The Create-Ag.ps1 shows the code and creates an AG with dbatools. But I digress!
Install Notebook Dependencies
Once you click New Notebook you will get a prompt to install the dependencies.
It will show its output
and take a few minutes to run
It took all of 11 minutes on my machine
Create a Notebook
OK, so now that we have the dependencies installed we can create a notebook. I decided to use the ValidationResults database that I use for my dbachecks demos and describe here. I need to restore it from my local folder, which I have mapped as a volume to my container. Of course, I use dbatools for this 🙂
I had already got a connection saved to the instance in Azure Data Studio; you may need to create a new one using the new connection icon at the top left and fill in the details. The password is in the code above.
Now I can start with my notebook. I am faced with this
I click on text and provide an intro
Once I had written that and clicked out, I couldn’t see what to do straight away!
Then I saw the code and text buttons at the top 🙂 Right, let’s get on with it 🙂 I hit the code button and pasted in the T-SQL to reset the dates in the database to simulate dbachecks having been run this morning.
There’s a run cell button on the right and when I press it
If the SQL query has results then they are shown as well
This is fun and I can see plenty of uses for it. Go and have a play with SQL notebooks 🙂
I used CTRL + K, CTRL + O to open a folder and saved my notebook in my local Presentations folder, which is source controlled. When I opened the explorer with CTRL + SHIFT + E I could see that the folder and the file are colour coded green and have a U next to them, marking them as Untracked. I can also see that the source control icon has a 1 for the number of files with changes and, in the bottom left, that I am in the master branch.
If I click on the source control icon (or CTRL + SHIFT + G) I can see the files with the changes and can enter a commit message
I then press CTRL + ENTER to commit my change and get this pop-up
As I only have one file and it has all the changes for this commit I click yes. If I had changed more than one file and only wanted to commit a single one at a time I would hover my mouse over the file and click the + to stage my change.
If I make a further change to the notebook and save it, I can see that the source control provider recognises the change but this time the folder the file is in and the file are colour coded brown with an M to show that they have been modified.
Unlike in Visual Studio Code, when you then click on the source control icon and click on the change, it does not show the differences in the notebook, although this does work with SQL files.
When I have made all my changes and committed them with good commit messages
I can see that there are 3 local commits ready to be pushed to my remote repository (GitHub in this case) and 0 remote commits in this branch by looking at the bottom left
I can click on the “roundy roundy” icon (I don’t know its proper name 😊) and synchronise my changes. This comes with a pop-up
Personally, I never press OK, Don’t Show Again because I like the double check and to think “Is this really what I want to do right now?”. Once I press OK my changes will be synchronised with the remote repository. All this means that you can find the notebook I have used in my Presentations GitHub repository, which means that you can run the notebook too, using the docker-compose file here and the instructions further up in the post.