Now I can use this yaml as configuration as code for my Build Pipeline 🙂 It can be used from any Azure DevOps project. Once you start looking at the code and the documentation for the yaml schema you can begin to write your pipelines as YAML, but sometimes it is easier to just create a build pipeline, or even just a job step, in the browser and click the View YAML button!
Create an AKS Cluster with a SQL 2019 container using Terraform and Build templates
This time I am going to choose the Configuration as code template
I am going to give it a name and it will show me that it needs the path to the yaml file containing the build definition in the current repository.
Clicking the 3 ellipses will pop up a file chooser and I pick the build.yaml file
The build.yaml file looks like this. The name is the USER/Repository Name and the endpoint is the name of the endpoint for the GitHub service connection in Azure DevOps. The template value is the name of the build yaml file followed by @ and the name given for the repository value.
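As a sketch, a build.yaml along these lines fits that description (the user, repository and service connection names here are placeholders):

resources:
  repositories:
    - repository: BuildTemplates                # the name given for the repository value
      type: github
      name: SQLDBAWithABeard/BuildTemplates     # USER/Repository Name (placeholder)
      endpoint: SQLDBAWithABeardGitHub          # GitHub service connection name (placeholder)

jobs:
  - template: build.yaml@BuildTemplates         # build yaml file @ repository value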
You can find (and change) your GitHub service connection name by clicking on the cog bottom left in Azure DevOps and clicking service connections
I still need to create my variables for my Terraform template (perhaps I can now just leave those in my code?). For the AKS Cluster build right now I have to add presentation, location, ResourceGroupName, AgentPoolName, ServiceName, VMSize and agent_count
Then I click save and queue and the job starts running
If I want to edit the pipeline it looks a little different
The variables and triggers can be found under the 3 ellipses on the top right
It also defaults the trigger to automatic deployment.
It takes a bit longer to build
and when I get the Terraform code wrong and the build fails, I can just alter the code, commit it, push and a new build will start and the Terraform will work out what is built and what needs to be built!
but eventually the job finishes successfully
and the resources are built
and in Visual Studio Code with the Kubernetes extension installed I can connect to the cluster by clicking the 3 ellipses and Add Existing Cluster
I choose Azure Kubernetes Services and click next
Choose my subscription and then add the cluster
and then I can explore my cluster
I can also see the dashboard by right-clicking on the cluster name and choosing Open Dashboard
Right-clicking on the service name and choosing Describe
shows the external IP address, which I can put into Azure Data Studio and connect to my container
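If you prefer the command line, kubectl will show the same thing once it is pointed at the cluster; assuming a service named mssql-deployment (a placeholder name):

kubectl get service mssql-deployment

The EXTERNAL-IP column is the address to use in Azure Data Studio.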
So now I can source control my Build Job Steps and hold them in a central repository, and I can make use of them in any project. This gives me much more control and saves me from repeating myself repeating myself. The disadvantage is that there is no handy warning when I change the underlying Build Repository that I will be affecting other Build Pipelines, and there is no easy method to see which Build Pipelines are dependent on the build yaml file
In my last post I showed how to build an Azure DevOps Pipeline for a Terraform build of an Azure SQLDB. This will take the terraform code and build the required infrastructure.
The plan all along has been to enable me to build different environments depending on the requirement. Obviously I can repeat the steps from the last post for a new repository containing the Terraform code for a different environment but
If you are going to do something more than once Automate It
who first said this? Anyone know?
The build steps for building the Terraform are the same each time (if I keep a standard folder and naming structure) so it would be much more beneficial if I could keep them in a single place and any alterations to the process only need to be made in the one place 🙂
A task group allows you to encapsulate a sequence of tasks, already defined in a build or a release pipeline, into a single reusable task that can be added to a build or release pipeline, just like any other task
Here’s the thing, creating a task group is so easy it should be the default way you create Azure DevOps Pipelines. Let me walk you through it
I will use the Build Pipeline from the previous post. Click edit from the build page
Then CTRL and click to select all of the steps
Right-click and there's a Create Task Group button to click!
You can see that it has helpfully added the values for the parameters it requires for the location, Storage Account and the Resource Group.
Remember the grey beard hair above? Those hard-coded values need to change to use the variables that we will add to the Build Pipeline.
Once you have done that click Create
This will also alter the current Build Pipeline to use the Task Group. Now we have a Task Group that we can use in any build pipeline in this project.
Using the Task Group with a new Build Pipeline to build an Azure Linux SQL VM
Let's re-use the build steps to create an Azure Linux SQL VM. First I created a new GitHub Repository for my Terraform code. Using the docs I created the Terraform to create a resource group, a Linux SQL VM, a virtual network, a subnet, a NIC for the VM, a public IP for the VM, and a network security group with two rules, one for SQL and one for SSH. It will look like this
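As a flavour of that code, the network security group with its two rules might look something like this sketch (the names, and a resource group resource called sqlvm, are assumptions for illustration):

resource "azurerm_network_security_group" "sqlvm" {
  name                = "linuxsqlvm-nsg"
  location            = "${var.location}"
  resource_group_name = "${azurerm_resource_group.sqlvm.name}"

  # Allow SSH so we can administer the VM
  security_rule {
    name                       = "SSH"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  # Allow SQL Server connections
  security_rule {
    name                       = "SQL"
    priority                   = 110
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "1433"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}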
The next step is to choose the repository
again we are going to select Empty job (although the next post will be about the Configuration as Code 🙂)
As before we will name the Build Pipeline and the Agent Job Step and click the + to add a new task. This time we will search for the Task Group name that we created
I need to add in the variables from the variables.tf in the code and also for the Task Group
and when I click save and queue
It runs for less than 7 minutes
and when I look in the Azure portal
and I can connect in Azure Data Studio
Altering The Task Group
You can find the Task Groups under Pipelines in your Azure DevOps project
Click on the Task Group that you have created and then you can alter or edit it if required and click save
This will warn you that any changes will affect all pipelines and task groups that are using this task group. To find out what will be affected, click on References, which will show you every pipeline and task group that uses it.
Now I can run the same build steps for any Build Pipeline and alter them all in a single place using Task Groups, simplifying the administration of the Build Pipelines.
In my last post I showed how to create a Resource Group and an Azure SQLDB with Terraform using Visual Studio Code to deploy.
Of course, I haven't stopped there; who wants to manually run code to create things? There was a lot of "install this" and "set up that". I would rather give the code to a build system and get it to run it. I can then even set it to automatically deploy new infrastructure when I commit some code to alter the configuration.
This scenario though is to build environments for presentations. Last time I created an Azure SQL DB and tagged it with DataInDevon (By the way you can get tickets for Data In Devon here – It is in Exeter on April 26th and 27th)
If I want to create the same environment but give it tags for a different event (this way I know when I can delete resources in Azure!) or name it differently, I can use Azure DevOps and alter the variables. I could just alter the code and commit the change and trigger a build, or I could create variables and enable them to be set at the time the job is run. I use the former in "work" situations and the latter for my presentations environment.
I have created a project in Azure DevOps for my Presentation Builds. I will be using GitHub to share the code that I have used. Once I clicked on pipelines, this is the page I saw
Clicking new pipeline, Azure DevOps asked me where my code was
I chose GitHub, authorised and chose the repository.
I then chose Empty Job on the next page. See the Configuration as code choice? We will come back to that later and our infrastructure as code will be deployed with a configuration as code 🙂
The next page allows us to give the build a good name and choose the Agent Pool that we want to use. Azure DevOps gives 7 different hosted agents running Linux, macOS or Windows, or you can download an agent and run it on your own machines. We will use the default agent for this process.
Clicking on Agent Job 1 enables me to change the name of the Agent Job. I could also choose a different type of Agent for different jobs within the same pipeline. This would be useful for testing different operating systems, for example, but for right now I shall just name it properly.
First we need somewhere to store the state of our build so that if we re-run it the Terraform plan step will be able to work out what it needs to do. (This is not absolutely required just for building my presentation environments, and this might not be the best way to achieve it, but for right now this is what I do and it works.)
I click on the + and search for Azure CLI.
and click on the Add button which gives me some boxes to fill in.
I choose my Azure subscription from the first drop down and choose Inline Script from the second
Inside the script block I put the following code
# the following script will create an Azure resource group, a storage account and a storage container which will be used to store the Terraform state
call az group create --location $(location) --name $(TerraformStorageRG)
call az storage account create --name $(TerraformStorageAccount) --resource-group $(TerraformStorageRG) --location $(location) --sku Standard_LRS
call az storage container create --name terraform --account-name $(TerraformStorageAccount)
This will create a Resource Group, a storage account and a container, using some variables to provide the values; we will come back to the variables later.
The next thing that we need to do is to enable the job to be able to access the storage account. We don't want to store that key anywhere, but we can use our Azure DevOps variables and some PowerShell to gather the access key and write it to the variable while the job is running. To create the variables I clicked on the variables tab
and then added the variables with the following names TerraformStorageRG, TerraformStorageAccount and location from the previous task and TerraformStorageKey for the next task.
With those created, I go back to Tasks and add an Azure PowerShell task
I then add this code to get the access key and overwrite the variable.
# Using this script we will fetch the storage key which is required in the Terraform file to authenticate to the backend storage account
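# (A sketch reconstructing the usual pattern with the AzureRM cmdlets of the time and the variable names created above)
# Fetch the first access key for the Terraform storage account
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName "$(TerraformStorageRG)" -Name "$(TerraformStorageAccount)").Value[0]
# Write it into the TerraformStorageKey pipeline variable for later tasks
Write-Host "##vso[task.setvariable variable=TerraformStorageKey]$key"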
description="The name of the Azure SQL database on - needs to be unique, lowercase between 3 and 24 characters including the prefix"
description="The Edition of the Database - Basic, Standard, Premium, or DataWarehouse"
description="The Service Tier S0, S1, S2, S3, P1, P2, P4, P6, P11 and ElasticPool"
It is exactly the same except that the values have been replaced by the variable name prefixed and suffixed with __ (double underscore). This will enable me to replace the values with the variables in my Azure DevOps Build job.
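For illustration, assuming a variable named SqlDatabaseName, a tokenised block looks something like this:

variable "SqlDatabaseName" {
  description = "The name of the Azure SQL database - needs to be unique, lowercase between 3 and 24 characters including the prefix"
  default     = "__SqlDatabaseName__"
}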
The backend-config.tfvars file will store the details of the state that will be created by the first step and use the access key that has been retrieved in the second step.
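As a sketch, assuming the token names match the variables above, its contents will be along these lines:

resource_group_name  = "__TerraformStorageRG__"
storage_account_name = "__TerraformStorageAccount__"
container_name       = "terraform"
key                  = "terraform.tfstate"
access_key           = "__TerraformStorageKey__"

The __tokens__ get swapped for the real values by the token replacement step below.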
I need to add the following variables to my Azure DevOps Build – Presentation, ResourceGroupName, SqlServerName, SQLServerAdminUser, SQLServerAdminPassword, SqlDatabaseName, Edition, ServiceObjective. Personally I would advise setting the password or any other sensitive values to sensitive by clicking the padlock for that variable. This will stop the value being written to the log as well as hiding it behind *'s
Because I have tagged the variables with Settable at queue time, I can set the values whenever I run a build, so if I am at a different event I can change the name.
But the build job hasn’t been set up yet. First we need to replace the values in the variables file.
I am going to use a standard naming convention for my infrastructure code files so I add Build to the Root Directory. You can also click the ellipses and navigate to a folder in your repo. In the Target Files I add **/*.tf and **/*.tfvars, which will search all of the folders (**) and only work on files with a .tf or .tfvars extension. The next step is to make sure that the replacement prefix and suffix are correct. It is hidden under Advanced
Because I often forget this step and to aid in troubleshooting, I add another step to read the contents of the files and place them in the logs. I do this by adding a PowerShell step.
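Something like this sketch does the job, assuming the repo is checked out to the task's default working directory:

# Write the contents of every Terraform file to the build log for troubleshooting
Get-ChildItem -Path . -Recurse -Include *.tf, *.tfvars | ForEach-Object {
    Write-Host "*** $($_.FullName) ***"
    Get-Content -Path $_.FullName
}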
Under control options there is a check box to enable or disable the steps so once I know that everything is ok with the build I will disable this step. The output in the log of a build will look like this showing the actual values in the files. This is really useful for finding spaces :-).
Running the Terraform in Azure DevOps
With everything set up we can now run the Terraform. I installed the Terraform task from the marketplace and added a task. We are going to follow the same process as the last blog post, init, plan, apply but this time we are going to automate it 🙂
First we will initialise
I put Build in the Terraform Template path. The Terraform arguments, going by the description below, will be something like
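init -backend-config=backend-config.tfvars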
which will tell Terraform to use the backend-config.tfvars file for the state. It is important to tick the Install terraform checkbox to ensure that terraform is available on the agent and to add the Azure Subscription (or Service Endpoint in a corporate environment).
After the initialise step, I add the Terraform task again, add Build to the template path, and this time the argument is plan
Again, tick the install terraform checkbox and also the Use Azure Service Endpoint and choose the Azure Subscription.
We also need to tell Terraform where to find the tfstate file by specifying the variables for the resource group, the storage account and the container
Finally, add another Terraform task for the apply, remembering to tick the Install terraform and Use Azure Service Endpoint checkboxes
Now we can build the environment – Clicking Save and Queue
opens this dialogue
where the variables can be filled in.
The build will be queued and clicking on the build number will open the logs
6 minutes later the job has finished
and the resources have been created.
If I want to look in the logs of the job I can click on one of the steps and take a look. This is the apply step
Do it Again For Another Presentation
So that is good, I can create my environment as I want it. Once my presentation has finished I can delete the Resource Groups. When I need to do the presentation again, I can queue another build and change the variables
The job will run
and the new resource group will be created
all ready for my next presentation 🙂
This is brilliant, I can set up the same solution for different repositories for different presentations (infrastructure) and recreate the above steps.
I have been using Terraform for the last week or so to create some infrastructure and decided to bring that knowledge back to a problem that I and others suffer from – building environments for presentations, all for the sake of doing some learning.
What is Terraform?
According to the website
HashiCorp Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned
This means that I can define my infrastructure as code. If I can do that then I can reliably do the same thing again and again, at work to create environments that have the same configuration or outside of work to repeatedly build the environment I need.
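For example, a resource group defined with the azurerm provider looks like this (the values are assumed for illustration):

resource "azurerm_resource_group" "test" {
  name     = "DataInDevon-rg"   # assumed value for illustration
  location = "ukwest"
}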
In that code, ${azurerm_resource_group.test.name} refers to the name property in the azurerm_resource_group block called test (or the name of the resource group 🙂 )
Infrastructure As Code
So I can put that code into a file (name it main.tf) and alter it with the values and "run Terraform" and what I want will be created. Let's take it a step further though, because I want to be able to reuse this code. Instead of hard-coding all of the values I am going to use variables. I can do this by creating another file called variables.tf which looks like
description="The name of the presentation - used for tagging Azure resources so I know what they belong to"
description="The Resource Group Name"
description="The Azure Region in which the resources in this example should exist"
description="The name of the Azure SQL Server to be created or to have the database on - needs to be unique, lowercase between 3 and 24 characters including the prefix"
description="The name of the Azure SQL Server Admin user for the Azure SQL Database"
Allen wanted to add his scripts folder to source control but didn't have a "how to do it" handy. So I thought I would write one. Hopefully this will enable someone new to GitHub and to source control to get a folder of scripts under source control
Open your scripts folder in Visual Studio Code and in the Source Control view up at the top you will see a little icon – Initialise Repository
Click that and choose your folder
Which will then show all of the changes to the repository (adding all the new files)
Now we need to add a commit message for our changes. I generally try to write commit messages that give the reason why the change has been made, as what has been changed is easy to see in VS Code (as well as in other source control GUI tools)
Click the tick or press CTRL + ENTER and this box will pop up
I never click Always, I click yes, so that I can check if I am committing the correct files. Now we have created a local repository for our scripts folder. Our next step is to publish it to GitHub
Create a New Repository in GitHub
In Github we need to create a remote repository. Click on the New Button. Give your repository a name and decide if you want it to be Public (available for anyone to search and find) or Private (only available to people you explicitly provide access to).
This will give you a page that looks like this
Copy the code after …or push an existing repository from the command line
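That code will look like this, with your own user and repository name in the URL (the one here is a placeholder):

git remote add origin https://github.com/USER/REPO.git
git push -u origin master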
2019-04-02 20:55:16.35 spid53 During undoing of a logged operation in database 'AdventureWorks2014' (page (0:0) if any), an error occurred at log record ID (65:6696:25). Typically, the specific failure is logged previously as an error in the operating system error log. Restore the database or file from a backup, or repair the database.
2019-04-02 20:55:16.37 spid53 Database AdventureWorks2014 was shutdown due to error 3314 in routine 'XdesRMReadWrite::RollbackToLsn'. Restart for non-snapshot databases will be attempted after all connections to the database are aborted. Restart packet created for dbid 5.
2019-04-02 20:55:16.41 spid53 Error during rollback. shutting down database (location: 1).
2019-04-02 20:55:16.44 spid53 During undoing of a logged operation in database 'AdventureWorks2014' (page (0:0) if any), an error occurred at log record ID (65:6696:25). Typically, the specific failure is logged previously as an error in the operating system error log. Restore the database or file from a backup, or repair the database.
2019-04-02 20:55:17.90 spid76 The log for database 'master' is not available. Check the operating system error log for related error messages. Resolve any errors and restart the database.
Master eh? Now what will you do?
2019-04-02 20:55:25.55 spid52 29 transactions rolled forward in database 'AdventureWorks2014' (5:0). This is an informational message only. No user action is required.
2019-04-02 20:55:25.90 spid52 1 transactions rolled back in database 'AdventureWorks2014' (5:0). This is an informational message only. No user action is required.
2019-04-02 20:55:25.90 spid52 Recovery is writing a checkpoint in database 'AdventureWorks2014' (5). This is an informational message only. No user action is required.
2019-04-02 20:55:26.16 spid52 Recovery completed for database AdventureWorks2014 (database ID 5) in 7 second(s) (analysis 424 ms, redo 5305 ms, undo 284 ms.) This is an informational message only. No user action is required.
2019-04-02 20:55:26.21 spid52 Parallel redo is shutdown for database 'AdventureWorks2014' with worker pool size.
2019-04-02 20:55:26.27 spid52 CHECKDB for database 'AdventureWorks2014' finished without errors on 2018-03-24 00:38:39.313 (local time). This is an informational message only; no user action is required.
Both of these run a random query in a single thread so I thought I would use PoshRSJob by Boe Prox (b | t) to run multiple queries at the same time 🙂
To install PoshRSJob, like with any PowerShell module, you run
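Install-Module -Name PoshRSJob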
I downloaded the AdventureWorksBOLWorkload zip from Pieter's blog post and extracted it to my C:\temp folder. I created an Invoke-RandomWorkload function which you can get from my functions repository on GitHub. The guts of the function are below (a sketch reconstructed from the description that follows; the parameter names and connection details are assumptions, and $queries is taken to be an array of query strings loaded from the workload file):
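# Create $NumberOfJobs background jobs, running at most $Throttle at once,
# each picking a random query from the workload and running it with Invoke-Sqlcmd
1..$NumberOfJobs | Start-RSJob -Name 'RandomWorkload' -Throttle $Throttle -ScriptBlock {
    param($queries, $SqlInstance, $Database)
    # Pick one query at random from the workload
    $query = Get-Random -InputObject $queries
    Invoke-Sqlcmd -ServerInstance $SqlInstance -Database $Database -Query $query
} -ArgumentList $queries, $SqlInstance, $Database |
    Wait-RSJob | Remove-RSJob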
which will create $NumberOfJobs jobs and then run $Throttle number of jobs in the background until they have all completed. Each job will run a random query from the query file using Invoke-SqlCmd. Why did I use Invoke-SqlCmd and not Invoke-DbaQuery from dbatools? dbatools creates runspaces in the background to help with logging, and creating runspaces inside background jobs causes errors
His instructions worked perfectly and I thought I would try them using a docker-compose file as I like the ease of spinning up containers with them.
I created a docker-compose file like this which will map my backup folder on my Windows 10 laptop to a directory on the container and two more folders to the system folders on the container in the same way as Andrew has in his blog.
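A sketch of such a file, where the image tag, password, paths and ports are placeholders for illustration:

version: '3.7'
services:
  2019-CTP23:
    image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu
    ports:
      - "15789:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Testing1122
    volumes:
      # backup folder on the laptop mapped into the container
      - "C:/MSSQL/BACKUPS:/var/opt/mssql/backups"
      # two more folders mapped to the system folders
      - "C:/MSSQL/SYSTEM:/var/opt/mssql"
      - "C:/MSSQL/DATA:/var/opt/sqlserver"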
Then I ran docker-compose up -d. This will build the containers as defined in the docker-compose file; the -d flag runs the container in the background. This was the result.
UPDATE – 2019-03-27
I have no idea why, but today it has worked as expected using the above docker-compose file. I had tried this a couple of times, restarted Docker and restarted my laptop, and was consistently getting the results below – however today it has worked
So feel free to carry on reading, it’s a fun story and it shows how you can persist the databases in a new container but the above docker-compose has worked!
This is an evaluation version. There are  days left in the evaluation period.
This program has encountered a fatal error and cannot continue running at Tue Mar 26 19:40:35 2019
The following diagnostic information is available:
Reason: 0x00000006
Status: 0x40000015
Message: Kernel bug check
Address: 0x6b643120
Parameters: 0x10861f680
Stacktrace: 000000006b72d63f 000000006b64317b 000000006b6305ca 000000006b63ee02 000000006b72b83a 000000006b72a29d 000000006b769c02 000000006b881000 000000006b894000 000000006b89c000 0000000000000001
Process: 7 - sqlservr
Thread: 11 (application thread 0x4)
Instance Id: e01b154f-7986-42c6-ae13-c7d34b8b257d
Crash Id: 8cbb1c22-a8d6-4fad-bf8f-01c6aa5389b7
Build stamp: 0e53295d0e1704ae5b221538dd6e2322cd46134e0cc32be49c887ca84cdb8c10
Distribution: Ubuntu 16.04.6 LTS
Processors: 2
Total Memory: 4906205184 bytes
Timestamp: Tue Mar 26 19:40:35 2019
Ubuntu 16.04.6 LTS
Capturing core dump and information to /var/opt/mssql/log...
dmesg: read kernel buffer failed: Operation not permitted
No journal files were found.
No journal files were found.
Attempting to capture a dump with paldumper
WARNING: Capture attempt failure detected
Attempting to capture a filtered dump with paldumper
WARNING: Attempt to capture dump failed. Reference /var/opt/mssql/log/core.sqlservr.7.temp/log/paldumper-debug.log for details
Attempting to capture a dump with gdb
WARNING: Unable to capture crash dump with GDB. You may need to allow ptrace debugging, enable the CAP_SYS_PTRACE capability, or run as root.
which told me that …………. it hadn’t worked. So I removed the containers with
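docker-compose down

(most likely, as it stops and removes the containers and the default network that the compose file created)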
I thought I would create the volumes ahead of time like Andrew’s blog had mentioned with
docker volume create mssqlsystem
docker volume create mssqluser
and then use the volume names in the docker-compose file mapped to the system folders in the container, this time the result was
ERROR: Named volume “mssqlsystem:/var/opt/sqlserver:rw” is used in service “2019-CTP23” but no declaration was found in the volumes section.
So that didn't work either 🙂
I decided to inspect the volume definition using
docker volume inspect mssqlsystem
I can see the mountpoint is /var/lib/docker/volumes/mssqlsystem/_data so I decided to try a docker-compose like this
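A sketch of it; the key change from before is mapping the volumes' mountpoints directly, with everything else staying as in the earlier sketch:

version: '3.7'
services:
  2019-CTP23:
    image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu
    ports:
      - "15789:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Testing1122
    volumes:
      # bind-mount the docker volumes' mountpoints to the system folders
      - /var/lib/docker/volumes/mssqlsystem/_data:/var/opt/mssql
      - /var/lib/docker/volumes/mssqluser/_data:/var/opt/sqlserver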
and then ran docker-compose up without the -d flag so that I could see all of the output
You can see in the output that the system database files are being moved. That looks like it is working so I used CTRL + C to stop the container and return the terminal. I then ran docker-compose up -d and
I created a special database for Andrew.
I could then remove the container with
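docker-compose down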
To make sure there is nothing up my sleeve I altered the docker-compose file to use a different name and port but kept the volume definitions the same.
I ran docker-compose up -d again and connected to the new container and lo and behold the database is still there
So after doing this, I have learned that to persist the databases and to use docker-compose files I had to map the volume to the mountpoint of the docker volume. Except I haven’t, I have learned that sometimes weird things happen with Docker on my laptop!!