PowerShell in SQL Notebooks in Azure Data Studio

I have done a lot of writing in the last few months but you see no blog posts! My wonderful friend Chrissy and I are writing "dbatools in a Month of Lunches" to be published by Manning. That has taken up a lot of my writing mojo. We have hit a little break whilst we have some reviews done ready for the MEAP (for everyone who asks, the answer is the unfulfilling 'soon') so it's time for a blog post!

SQL Notebooks are cool

I have had a lot of fun with SQL Notebooks recently. I have presented a session about them at a couple of events this month, DataGrillen and SQL Saturday Cork. Here is a little snippet.

Yes, you can run PowerShell in a SQL Notebook in Azure Data Studio just by clicking a link in the markdown cell. This opens up a lot of excellent possibilities.

I have had several discussions about how SQL Notebooks can be used by SQL DBAs within their normal everyday roles. (Mainly because I don't really understand what the sorcerers of data science do with notebooks!). I have helped clients to look at some of their processes and use SQL Notebooks to help with them: creating Disaster Recovery or Change Run-books or Incident Response Templates, or using them for product demonstrations. Of course, I needed to use PowerShell in that 🙂

I have really enjoyed working out how to run PowerShell in the markdown in a SQL Notebook in Azure Data Studio and I think Anthony the Kubernetes magician did too!

OK, enough magic puns, let's talk about PowerShell in SQL Notebooks. You can read about how to create a SQL Notebook and run T-SQL queries here (you no longer need the Insider Edition, by the way).

PowerShell in Markdown!

First, before I go any further, I must say this. I was at the European PowerShell Conference when I was working this out and creating my sessions and I said the words

"Cool, I can click a link and run PowerShell, this is neat"

A Beardy fellow in Hannover

This stopped some red team friends of mine in their tracks and they said "Show me". One of them was rubbing their hands with glee! You can imagine the sort of wicked, devious things that they were immediately considering doing.

Yes, it's funny but it also carries a serious warning. Without understanding what it is doing, please don't enable PowerShell to be run in a SQL Notebook that someone sent you in an email or that you find on GitHub. In the same way as you don't open the Word document attachment which will get a thousand million trillion pounddollars into your bank account or run code you copy from the internet on production without understanding what it does, this could be a very dangerous thing to do.

With that warning out of the way, there are loads of really useful and fantastic use cases for this. SQL Notebooks make great run-books or incident response recorders and PowerShell is an obvious tool for this. (If only we could save the PowerShell output in a SQL Notebook, this would be even better)

How on earth did you work this out?

Someone asked me how I worked it out. I didn't! It began with Vicky Harp, PM lead for the SQL Tools team at Microsoft

I then went and looked at Kevin Cunnane's notebook. Kevin is a member of the tools team working on Azure Data Studio. With SQL Notebooks, you can double click the markdown cell and see the code that is behind it. To understand how it is working, let's deviate a little.

Keyboard Shortcuts

If you click the cog at the bottom left of Azure Data Studio and choose Keyboard Shortcuts

you can make Azure Data Studio (and Visual Studio Code) work exactly how you want it to. Typing in the top box will find a command and you can then set the shortcuts that you want to use to save yourself time.

This also enables you to see the command that is called when you use a keyboard shortcut. For example, you can see that for the focus terminal command it says workbench.action.terminal.focus.

It turns out that you can call this as a link in a Markdown document using HTML with <a href=""> and adding command: prior to the command text. When the link is clicked the command will run. Cool 🙂
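For example, using the focus terminal command found above, a minimal link in a markdown cell might look like this (the link text is whatever you like):

<a href="command:workbench.action.terminal.focus">Focus the terminal</a>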

For this to be able to work (you read the warning above?) you need to set the Notebook to be trusted by clicking this button.

This will allow any command to be run. Of course, people with beards will helpfully advise when this is required for a SQL Notebook. (Safe to say people attempting nefarious actions will try the same with your users)

Now that we know how to run an Azure Data Studio command using a link in a markdown cell the next step is to run a PowerShell command. I headed to the Visual Studio Code documentation and found

Send text from a keybinding
The workbench.action.terminal.sendSequence command can be used to send a specific sequence of text to the terminal, including escape sequences

That's the command we need, however, we still need to craft the command so that it will work as a link. It needs to be converted into a URL.

I started by using this website https://www.url-encode-decode.com/ to do this. This is also how you can check the code in other people's notebooks - use the decode capability.

Encoding Set-Location C:\dbachecks gives Set-Location+C%3A%5Cdbachecks
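If you prefer to stay in PowerShell rather than use a website, a quick sketch of the same encode and decode using the System.Web assembly (which ships with Windows PowerShell):

Add-Type -AssemblyName System.Web
[System.Web.HttpUtility]::UrlEncode('Set-Location C:\dbachecks')     # encode
[System.Web.HttpUtility]::UrlDecode('Set-Location+C%3a%5cdbachecks') # decode - handy for checking other people's notebooks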

So I can just put that code into the href link and bingo!

If only it was that easy!!

Some Replacing is required

The + needs to be replaced with a space or %20

You also need to double the \ and replace the %3A with a :
The " needs to be replaced with \u022, the ' with \u027, the curly braces won’t work unless you remove the %0D%0A. Got all that? Good!

Once you have written your PowerShell, encoded it, performed the replacements, you add \u000D at the end of the code to pass an enter to run the code and then place all of that into a link like this

<a href="command:workbench.action.terminal.sendSequence?%7B%22text%22%3A%22 PLACE THE ENCODED CODE HERE %22%7D">Link Text</a>

This means that if you want a Markdown cell to run PowerShell which sets a location and then lists the files and folders in that location, like this

Set-Location C:\dbachecks
Get-ChildItem

You would end up with a link like this

<a href="command:workbench.action.terminal.sendSequence?%7B%22text%22%3A%22 Set-Location C:%5C%5Cdbachecks \u000D Get-ChildItem \u000D %22%7D">Set Location and list files</a>

Doing something more than once?

I don't want to remember all of that every time so I wrote a PowerShell function. You can find it on GitHub https://github.com/SQLDBAWithABeard/Functions/blob/master/Convert-ADSPowerShellForMarkdown.ps1

This will take a PowerShell command and turn it into a link that will work in an Azure Data Studio markdown cell. It's not magic, it's PowerShell. There is a -ToClipboard parameter which will copy the code to the clipboard ready for you to paste into the cell (on Windows machines only).
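If you are curious what such a function boils down to, here is a minimal sketch of the same idea - not the actual code from the repository, the function name and parameters here are made up for illustration - it just URL-encodes the command, performs the replacements described above and wraps the result in the sendSequence link:

function ConvertTo-ADSMarkdownLink {
    param (
        [string]$Command,
        [string]$LinkText = 'Run Code'
    )
    Add-Type -AssemblyName System.Web
    # URL-encode the command, then perform the replacements described above
    $encoded = [System.Web.HttpUtility]::UrlEncode($Command)
    $encoded = $encoded -replace '\+', ' '        # + back to a space
    $encoded = $encoded -replace '%3a', ':'       # keep the colon
    $encoded = $encoded -replace '%5c', '%5C%5C'  # double the backslash
    $encoded = $encoded -replace '%22', '\u0022'  # " as a unicode escape
    $encoded = $encoded -replace '%27', '\u0027'  # ' as a unicode escape
    $encoded = $encoded -replace '%0d%0a', ''     # remove encoded line endings
    # \u000D passes Enter at the end so the code runs, then wrap it all in the link
    '<a href="command:workbench.action.terminal.sendSequence?%7B%22text%22%3A%22' + $encoded + '\u000D%22%7D">' + $LinkText + '</a>'
}

The real function in the repository does a little more than this, so use that one; this is just to show there is no sorcery involved.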

Giants

There are many uses for this but here’s one I think is cool.

The link below will go to a notebook, which will show you how the giants upon whose shoulders I stand

Glenn Berry,
Chrissy LeMaire,
André Kamman,
Gianluca Sartori

have enabled me to create a SQL Notebook with a link which will run some PowerShell to create a SQL Notebook which will have all of the Diagnostic Queries in it.

You could possibly use something like it for your incident response SQL Notebook.
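In case you are wondering how PowerShell can write a SQL Notebook at all: a notebook is just a JSON (.ipynb) file, so a bare-bones sketch looks something like this (only a couple of example cells, nothing like the real diagnostic notebook):

$notebook = [ordered]@{
    metadata        = @{ kernelspec = @{ name = 'SQL'; display_name = 'SQL'; language = 'sql' } }
    nbformat        = 4
    nbformat_minor  = 2
    cells           = @(
        [ordered]@{
            cell_type = 'markdown'
            source    = '# Diagnostic Queries'
        },
        [ordered]@{
            cell_type       = 'code'
            source          = 'SELECT @@VERSION;'
            metadata        = @{}
            outputs         = @()
            execution_count = 0
        }
    )
}
# Write it out and open the result in Azure Data Studio
$notebook | ConvertTo-Json -Depth 10 | Set-Content -Path .\DiagnosticQueries.ipynb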

It's also cool that GitHub renders the notebook in a browser (you can't run PowerShell or T-SQL from there though, you need Azure Data Studio!)

https://github.com/SQLDBAWithABeard/Presentations/blob/master/2019/Berlin%20SQL%20User%20Group/04%20-%20Glenn%20Berry%20Notebook.ipynb


Azure SQL Linux VM – configuring SQL, installing pwsh and connecting and interacting with dbatools

In my posts about using Azure DevOps to build Azure resources with Terraform, I built a Linux SQL VM. I used the Terraform in this GitHub repository and created this

Connecting with MobaXterm

I had set the Network security rules to accept connections only from my static IP using variables in the Build Pipeline. I use MobaXterm as my SSH client. It's a free download. I click on Sessions

Choose an SSH session and fill in the remote host address from the portal

fill in the password and connect.

Configuring SQL

The next task is to configure the SQL installation. Following the instructions on the Microsoft docs site I run

sudo systemctl stop mssql-server
sudo /opt/mssql/bin/mssql-conf set-sa-password

enter the sa password and confirm it.

Now to start SQL

sudo systemctl start mssql-server
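You can check that the service has come back up before moving on:

sudo systemctl status mssql-server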

Installing pwsh

Installing PowerShell Core (pwsh) is easy with snap

sudo snap install powershell --classic

A couple of minutes of downloading and installing

and pwsh is ready for use

Installing dbatools

To install dbatools from the PowerShell Gallery simply run

Install-Module dbatools -Scope CurrentUser

This will prompt you to allow installing from an untrusted repository
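If you would rather not see that prompt every time, you can mark the gallery as trusted first with the built-in PowerShellGet cmdlet:

Set-PSRepository -Name PSGallery -InstallationPolicy Trusted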

and dbatools is ready to go

# Set a credential
$cred = Get-Credential
# Show the databases on the local instance
Get-DbaDatabase -SqlInstance localhost -SqlCredential $cred

Connecting with Azure Data Studio

I can also connect with Azure Data Studio

and connect

Just a quick little post explaining what I did 🙂

Happy Linuxing!

Using Azure DevOps Build Pipeline Templates with Terraform to build an AKS cluster

In the last few posts I have moved from building an Azure SQL DB with Terraform using VS Code, to automating the build process for the Azure SQL DB using Azure DevOps Build Pipelines, to using Task Groups in Azure DevOps to reuse the same Build Process and build an Azure Linux SQL VM and Network Security Group. This evolution is fantastic but Task Groups can only be used in the same Azure DevOps project. It would be brilliant if I could use Configuration as Code for the Azure Build Pipeline and store that in a separate source control repository which can be used from any Azure DevOps Project.

Luckily, you can 😉 You can use Azure DevOps Job Templates to achieve this. There is a limitation at present: you can only use them for Build Pipelines and not Release Pipelines.

The aim of this little blog series was to have a single Build Pipeline stored as code which I can use to build any infrastructure that I want with Terraform in Azure and be able to use it anywhere.

Creating a Build Pipeline Template

I created a GitHub repository to hold my Build Templates, feel free to use them as a base for your own but please don't try to use the repo for your own builds.

The easiest way to create a Build Template is to already have a Build Pipeline. This cannot be done from a Task Group but I still have the Build Pipeline from my automating the build process for the Azure SQL DB using Azure DevOps Build Pipelines blog post.

There is a View YAML button. I can click this to view the YAML definition of the Build Pipeline

I copy that and paste it into a new file in my BuildTemplates repository. (I have replaced my Azure Subscription information in the public repository)

jobs:
- job: Build
  pool:
    name: Hosted VS2017
    demands: azureps
  steps:
  - task: AzureCLI@1
    displayName: 'Azure CLI to deploy azure storage for backend'
    inputs:
      azureSubscription: 'PUTYOURAZURESUBNAMEHERE'
      scriptLocation: inlineScript
      inlineScript: |
        # the following script will create Azure resource group, Storage account and a Storage container which will be used to store terraform state
        call az group create --location $(location) --name $(TerraformStorageRG)
        
        call az storage account create --name $(TerraformStorageAccount) --resource-group $(TerraformStorageRG) --location $(location) --sku Standard_LRS
        
        call az storage container create --name terraform --account-name $(TerraformStorageAccount)

  - task: AzurePowerShell@3
    displayName: 'Azure PowerShell script to get the storage key'
    inputs:
      azureSubscription: 'PUTYOURAZURESUBNAMEHERE'
      ScriptType: InlineScript
      Inline: |
        # Using this script we will fetch the storage key which is required in the terraform file to authenticate the backend storage account
      
        $key=(Get-AzureRmStorageAccountKey -ResourceGroupName $(TerraformStorageRG) -AccountName $(TerraformStorageAccount)).Value[0]
      
        Write-Host "##vso[task.setvariable variable=TerraformStorageKey]$key"
      azurePowerShellVersion: LatestVersion

  - task: qetza.replacetokens.replacetokens-task.replacetokens@3
    displayName: 'Replace tokens in terraform file'
    inputs:
      rootDirectory: Build
      targetFiles: |
        **/*.tf
        **/*.tfvars
      tokenPrefix: '__'
      tokenSuffix: '__'

  - powershell: |
      Get-ChildItem .\Build -Recurse
    
      Get-Content .\Build\*.tf 
      Get-Content .\Build\*.tfvars 
    
      Get-ChildItem Env: | select Name
    displayName: 'Check values in files'
    enabled: false

  - task: petergroenewegen.PeterGroenewegen-Xpirit-Vsts-Release-Terraform.Xpirit-Vsts-Release-Terraform.Terraform@2
    displayName: 'Initialise Terraform'
    inputs:
      TemplatePath: Build
      Arguments: 'init -backend-config="0-backend-config.tfvars"'
      InstallTerraform: true
      UseAzureSub: true
      ConnectedServiceNameARM: 'PUTYOURAZURESUBNAMEHERE'

  - task: petergroenewegen.PeterGroenewegen-Xpirit-Vsts-Release-Terraform.Xpirit-Vsts-Release-Terraform.Terraform@2
    displayName: 'Plan Terraform execution'
    inputs:
      TemplatePath: Build
      Arguments: plan
      InstallTerraform: true
      UseAzureSub: true
      ConnectedServiceNameARM: 'PUTYOURAZURESUBNAMEHERE'

  - task: petergroenewegen.PeterGroenewegen-Xpirit-Vsts-Release-Terraform.Xpirit-Vsts-Release-Terraform.Terraform@2
    displayName: 'Apply Terraform'
    inputs:
      TemplatePath: Build
      Arguments: 'apply -auto-approve'
      InstallTerraform: true
      UseAzureSub: true
      ConnectedServiceNameARM: 'PUTYOURAZURESUBNAMEHERE'

Now I can use this YAML as configuration as code for my Build Pipeline 🙂 It can be used from any Azure DevOps project. Once you start looking at the code and the documentation for the YAML schema you can begin to write your pipelines as YAML, but sometimes it is easier to just create a build pipeline or even just a job step in the browser and click the View YAML button!

Create an AKS Cluster with a SQL 2019 container using Terraform and Build templates

I have a GitHub Repository with the Terraform code to build a simple AKS cluster. This could not have been achieved without Richard Cheney's article. I am not going to explain how it all works or some of the negatives of doing it this way in this blog post. Instead let's build an Azure DevOps Build Pipeline to build it with Terraform using Configuration as Code (the YAML file).

I am going to create a new Azure DevOps Build Pipeline and as in the previous posts connect it to the GitHub Repository holding the Terraform code.

This time I am going to choose the Configuration as code template

I am going to give it a name and it will show me that it needs the path to the YAML file containing the build definition in the current repository.

Clicking the 3 ellipses will pop-up a file chooser and I pick the build.yaml file

The build.yaml file looks like this. The name is the USER/Repository Name and the endpoint is the name of the endpoint for the GitHub service connection in Azure DevOps. The template value is the name of the build YAML file @ the name given for the repository value.

resources:
  repositories:
    - repository: templates
      type: github
      name: SQLDBAWithABeard/Presentations-BuildTemplates-Private
      endpoint: SQLDBAWithABeardGitHub

jobs:
- template: AzureTerraform.yaml@templates  # Template reference

You can find (and change) your GitHub service connection name by clicking on the cog bottom left in Azure DevOps and clicking service connections

I still need to create my variables for my Terraform template (perhaps I can now just leave those in my code?) For the AKS Cluster build right now I have to add presentation, location, ResourceGroupName, AgentPoolName, ServiceName, VMSize, agent_count

Then I click save and queue and the job starts running

If I want to edit the pipeline it looks a little different

The variables and triggers can be found under the 3 ellipses on the top right

It also defaults the trigger to automatic deployment.

It takes a bit longer to build

and when I get the Terraform code wrong and the build fails, I can just alter the code, commit it, push and a new build will start and the Terraform will work out what is built and what needs to be built!

but eventually the job finishes successfully

and the resources are built

and in Visual Studio Code with the Kubernetes extension installed I can connect to the cluster by clicking the 3 ellipses and Add Existing Cluster

I choose Azure Kubernetes Services and click next

Choose my subscription and then add the cluster

and then I can explore my cluster

I can also see the dashboard by right clicking on the cluster name and Open Dashboard

Right clicking on the service name and choosing describe

shows the external IP address, which I can put into Azure Data Studio and connect to my container
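If you prefer a terminal to the extension, the same information is available with the Azure CLI and kubectl (the resource group, cluster and service names below are placeholders for your own):

az aks get-credentials --resource-group <resource-group> --name <cluster-name>
kubectl get nodes                    # check the nodes are ready
kubectl get svc                      # list services and their external IPs
kubectl describe svc <service-name>  # the same describe as the right-click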

So now I can source control my Build Job Steps and hold them in a central repository. I can make use of them in any project. This gives me much more control and saves me from repeating myself repeating myself. The disadvantage is that there is no handy warning when I change the underlying Build Repository that I will be affecting other Build Pipelines, and there is no easy method to see which Build Pipelines are dependent on the build YAML file.

Happy Automating

Using the same Azure DevOps build steps for Terraform with different Pipelines with Task Groups to build an Azure Linux SQL VM

In my last post I showed how to build an Azure DevOps Pipeline for a Terraform build of an Azure SQLDB. This will take the Terraform code and build the required infrastructure.

The plan all along has been to enable me to build different environments depending on the requirement. Obviously I can repeat the steps from the last post for a new repository containing the Terraform code for a different environment, but

If you are going to do something more than once Automate It

Who first said this? Anyone know?

The build steps for building the Terraform are the same each time (if I keep a standard folder and naming structure) so it would be much more beneficial if I could keep them in a single place and any alterations to the process only need to be made in the one place 🙂

Task Groups

Azure DevOps has task groups. On the Microsoft Docs web-page they are described as


A task group allows you to encapsulate a sequence of tasks, already defined in a build or a release pipeline, into a single reusable task that can be added to a build or release pipeline, just like any other task


https://docs.microsoft.com/en-us/azure/devops/pipelines/library/task-groups?view=azure-devops

If you are doing this with a more complicated existing build pipeline it is important that you read the Before You Create A Task Group on the docs page. This will save you time when trying to understand why variables are not available (Another grey hair on my beard!)

Creating A Task Group

Here's the thing, creating a task group is so easy it should be the default way you create Azure DevOps Pipelines. Let me walk you through it.

I will use the Build Pipeline from the previous post. Click edit from the build page

Then CTRL and click to select all of the steps

Right-click and there's a Create Task Group button to click!

You can see that it has helpfully added the values for the parameters it requires for the location, Storage Account and the Resource Group.

Remember the grey beard hair above? We need to change those values to use the variables that we will add to the Build Pipeline using

$(VariableName)

Once you have done that click Create

This will also alter the current Build Pipeline to use the Task Group. Now we have a Task Group that we can use in any build pipeline in this project.

Using the Task Group with a new Build Pipeline to build an Azure Linux SQL VM

Let's reuse the build steps to create an Azure SQL Linux VM. First I created a new GitHub Repository for my Terraform code. Using the docs I created the Terraform to create a resource group, a Linux SQL VM, a virtual network, a subnet, a NIC for the VM, a public IP for the VM, and a network security group with two rules, one for SQL and one for SSH. It will look like this

The next step is to choose the repository

again we are going to select Empty job (although the next post will be about the Configuration as Code 🙂)

As before we will name the Build Pipeline and the Agent Job Step and click the + to add a new task. This time we will search for the Task Group name that we created

I need to add in the variables from the variables.tf in the code and also for the Task Group

and when I click save and queue

It runs for less than 7 minutes

and when I look in the Azure portal

and I can connect in Azure Data Studio

Altering The Task Group

You can find the Task Groups under Pipelines in your Azure DevOps project

Click on the Task Group that you have created and then you can alter and edit it if required and click save

This will warn you that any changes will affect all pipelines and task groups that are using this task group. To find out what will be affected click on references


which will show you what will be affected.

Now I can run the same build steps for any Build Pipeline and alter them all in a single place using Task Groups simplifying the administration of the Build Pipelines.

The next post will show how to use Azure DevOps templates to use the same build steps across many projects and build pipelines and will build a simple AKS cluster

The first post showed how to build an Azure SQLDB with Terraform using VS Code

The second post showed how to use Azure DevOps Task Groups to use the same build steps in multiple pipelines and build an Azure Linux SQL Server VM

Happy Automating!

Building Azure SQL Db with Terraform using Azure DevOps

In my last post I showed how to create a Resource Group and an Azure SQLDB with Terraform using Visual Studio Code to deploy.

Of course, I haven't stopped there; who wants to manually run code to create things? There was a lot of install this and set up that. I would rather give the code to a build system and get it to run it. I can then even set it to automatically deploy new infrastructure when I commit some code to alter the configuration.

This scenario though is to build environments for presentations. Last time I created an Azure SQL DB and tagged it with DataInDevon (by the way, you can get tickets for Data In Devon here - it is in Exeter on April 26th and 27th)

If I want to create the same environment but give it tags for a different event (this way I know when I can delete resources in Azure!) or name it differently, I can use Azure DevOps and alter the variables. I could just alter the code and commit the change and trigger a build or I could create variables and enable them to be set at the time the job is run. I use the former in "work" situations and the latter for my presentations environment.

I have created a project in Azure DevOps for my Presentation Builds. I will be using GitHub to share the code that I have used. Once I clicked on pipelines, this is the page I saw

Clicking new pipeline, Azure DevOps asked me where my code was

I chose GitHub, authorised and chose the repository.

I then chose Empty Job on the next page. See the Configuration as code choice? We will come back to that later and our infrastructure as code will be deployed with a configuration as code 🙂

The next page allows us to give the build a good name and choose the Agent Pool that we want to use. Azure DevOps gives 7 different hosted agents running Linux, Mac, or Windows, or you can download an agent and run it on your own CPUs. We will use the default agent for this process.

Clicking on Agent Job 1 enables me to change the name of the Agent Job. I could also choose a different type of Agent for different jobs within the same pipeline. This would be useful for testing different OSs for example but for right now I shall just name it properly.

State

First we need somewhere to store the state of our build so that if we re-run it the Terraform plan step will be able to work out what it needs to do. (This is not absolutely required just for building my presentation environments, and this might not be the best way to achieve it, but for right now this is what I do and it works.)

I click on the + and search for Azure CLI.

and click on the Add button which gives me some boxes to fill in.

I choose my Azure subscription from the first drop down and choose Inline Script from the second

Inside the script block I put the following code

# the following script will create Azure resource group, Storage account and a Storage container which will be used to store terraform state
call az group create --location $(location) --name $(TerraformStorageRG)

call az storage account create --name $(TerraformStorageAccount) --resource-group $(TerraformStorageRG) --location $(location) --sku Standard_LRS

call az storage container create --name terraform --account-name $(TerraformStorageAccount)

This will create a Resource Group, a storage account and a container and use some variables to provide the values, we will come back to the variables later.

Access Key

The next thing that we need to do is to enable the job to be able to access the storage account. We don't want to store that key anywhere but we can use our Azure DevOps variables and some PowerShell to gather the access key and write it to the variable when the job is running. To create the variables I clicked on the variables tab

and then added the variables with the following names TerraformStorageRG, TerraformStorageAccount and location from the previous task and TerraformStorageKey for the next task.

With those created, I go back to Tasks and add an Azure PowerShell task

I then add this code to get the access key and overwrite the variable.

# Using this script we will fetch the storage key which is required in the terraform file to authenticate the backend storage account

$key=(Get-AzureRmStorageAccountKey -ResourceGroupName $(TerraformStorageRG) -AccountName $(TerraformStorageAccount)).Value[0]

Write-Host "##vso[task.setvariable variable=TerraformStorageKey]$key"

Infrastructure as Code

In my GitHub repository I now have the following folders

The manual folders hold the code from the last blog post. In the Build folder, the main.tf file is identical and looks like this.

provider "azurerm" {
    version = "=1.24.0"
}

terraform {
  backend "azurerm" {
    key = "terraform.tfstate"
  }
}

resource "azurerm_resource_group" "presentation" {
  name     = "${var.ResourceGroupName}"
  location = "${var.location}"
    tags = {
    environment = "${var.presentation}"
  }
}

resource "azurerm_sql_server" "presentation" {
  name                         = "${var.SqlServerName}"
  resource_group_name          = "${azurerm_resource_group.presentation.name}"
  location                     = "${var.location}"
  version                      = "12.0"
  administrator_login          = "__SQLServerAdminUser__"
  administrator_login_password = "__SQLServerAdminPassword__"
    tags = {
    environment = "${var.presentation}"
  }
}

resource "azurerm_sql_database" "presentation" {
  name                = "${var.SqlDatabaseName}"
  resource_group_name = "${azurerm_sql_server.presentation.resource_group_name}"
  location            = "${var.location}"
  server_name         = "${azurerm_sql_server.presentation.name}"
  edition                          = "${var.Edition}"
  requested_service_objective_name = "${var.ServiceObjective}"

  tags = {
    environment = "${var.presentation}"
  }
}

The variables.tf file looks like this.

variable "presentation" {
    description = "The name of the presentation - used for tagging Azure resources so I know what they belong to"
    default = "__Presentation__"
}

variable "ResourceGroupName" {
  description = "The Prefix used for all resources in this example"
  default     = "__ResourceGroupName__"
}

variable "location" {
  description = "The Azure Region in which the resources in this example should exist"
  default     = "__location__"
}

variable "SqlServerName" {
  description = "The name of the Azure SQL Server to be created or to have the database on - needs to be unique, lowercase between 3 and 24 characters including the prefix"
  default     = "__SqlServerName__"
}

variable "SQLServerAdminUser" {
  description = "The name of the Azure SQL Server Admin user for the Azure SQL Database"
  default     = "__SQLServerAdminUser__"
}
variable "SQLServerAdminPassword" {
  description = "The Azure SQL Database users password"
  default     = "__SQLServerAdminPassword__"
}
variable "SqlDatabaseName" {
  description = "The name of the Azure SQL database on - needs to be unique, lowercase between 3 and 24 characters including the prefix"
  default     = "__SqlDatabaseName__"
}


variable "Edition" {
  description = "The Edition of the Database - Basic, Standard, Premium, or DataWarehouse"
  default     = "__Edition__"
}

variable "ServiceObjective" {
  description = "The Service Tier S0, S1, S2, S3, P1, P2, P4, P6, P11 and ElasticPool"
  default     = "__ServiceObjective__"
}

It is exactly the same except that the values have been replaced by the value name prefixed and suffixed with __. This will enable me to replace the values with the variables in my Azure DevOps Build job.

The 0-backend-config.tfvars file will store the details of the state that will be created by the first step and use the access key that has been retrieved in the second step.

resource_group_name = "__TerraformStorageRG__"

storage_account_name = "__TerraformStorageAccount__"

container_name = "terraform"

access_key = "__TerraformStorageKey__"

I need to add the following variables to my Azure DevOps Build - Presentation, ResourceGroupName, SqlServerName, SQLServerAdminUser, SQLServerAdminPassword, SqlDatabaseName, Edition, ServiceObjective. Personally I would advise setting the password or any other sensitive values to sensitive by clicking the padlock for that variable. This will stop the value being written to the log as well as hiding it behind *'s.

Because I have tagged the variables with Settable at queue time, I can set the values whenever I run a build, so if I am at a different event I can change the name.

But the build job hasn't been set up yet. First we need to replace the values in the variables file.

Replace the Tokens

I installed the Replace Tokens Task from the marketplace and added that to the build.

I am going to use a standard naming convention for my infrastructure code files so I add Build to the Root Directory. You can also click the ellipses and navigate to a folder in your repo. In the Target Files I add "**/*.tf" and "**/*.tfvars", which will search all of the folders (**) and only work on files with a .tf or .tfvars extension. The next step is to make sure that the replacement prefix and suffix are correct. It is hidden under Advanced.
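If you want to sanity-check what the Replace Tokens task will do before running a build, a rough local equivalent in PowerShell looks like this (the token values here are examples only, not the real pipeline variables):

$tokens = @{
    location          = 'uksouth'
    ResourceGroupName = 'beardrules'
}
Get-ChildItem .\Build -Recurse -Include *.tf, *.tfvars | ForEach-Object {
    $content = Get-Content -Path $_.FullName -Raw
    foreach ($name in $tokens.Keys) {
        # each token in the files is wrapped in the __ prefix and suffix
        $content = $content.Replace("__$($name)__", $tokens[$name])
    }
    Set-Content -Path $_.FullName -Value $content
}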

Because I often forget this step and to aid in troubleshooting I add another step to read the contents of the files and place them in the logs. I do this by adding a PowerShell step which uses

Get-ChildItem .\Build -Recurse

Get-Content .\Build\*.tf 
Get-Content .\Build\*.tfvars 

Under control options there is a check box to enable or disable the steps so once I know that everything is ok with the build I will disable this step. The output in the log of a build will look like this showing the actual values in the files. This is really useful for finding spaces :-).

Running the Terraform in Azure DevOps

With everything set up we can now run the Terraform. I installed the Terraform task from the marketplace and added a task. We are going to follow the same process as the last blog post, init, plan, apply, but this time we are going to automate it 🙂

First we will initialise

I put Build in the Terraform Template path. The Terraform arguments are

init -backend-config="0-backend-config.tfvars"

which will tell the Terraform to use the 0-backend-config.tfvars file for the state. It is important to tick the Install terraform checkbox to ensure that terraform is available on the agent and to add the Azure Subscription (or Service Endpoint in a corporate environment).

After the initialise, I add the Terraform task again, add Build to the target path, and this time the argument is plan

Again, tick the install terraform checkbox and also the Use Azure Service Endpoint and choose the Azure Subscription.

We also need to tell the Terraform where to find the tfstate file by specifying the variables for the resource group and storage account and the container

Finally, add another Terraform task for the apply remembering to tick the install Terraform and Use Azure checkboxes

The arguments are

apply -auto-approve

This will negate the requirement for the "Only 'yes' will be accepted to approve" prompt from the manual steps post!

Build a Thing

Now we can build the environment - clicking Save and Queue

opens this dialogue

where the variables can be filled in.

The build will be queued and clicking on the build number will open the logs

6 minutes later the job has finished

and the resources have been created.

If I want to look in the logs of the job I can click on one of the steps and take a look. This is the apply step

Do it Again For Another Presentation

So that is good, I can create my environment as I want it. Once my presentation has finished I can delete the Resource Groups. When I need to do the presentation again, I can queue another build and change the variables

The job will run

and the new resource group will be created

all ready for my next presentation 🙂

This is brilliant, I can set up the same solution for different repositories for different presentations (infrastructure) and recreate the above steps.

The next post will show how to use Azure DevOps Task Groups to use the same build steps in multiple pipelines and build an Azure Linux SQL Server VM

The post after that will show how to use Azure DevOps templates to use the same build steps across many projects and build pipelines and will build a simple AKS cluster

The first post showed how to build an Azure SQLDB with Terraform using VS Code

Building Azure SQL Db with Terraform with Visual Studio Code

I have been using Terraform for the last week or so to create some infrastructure and decided to bring that knowledge back to a problem that I and others suffer from - building environments for presentations, all for the sake of doing some learning.

What is Terraform?

According to the website


HashiCorp Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned


https://www.terraform.io/

This means that I can define my infrastructure as code. If I can do that then I can reliably do the same thing again and again, at work to create environments that have the same configuration or outside of work to repeatedly build the environment I need.

Building an Azure SQL Database with Terraform

To understand how to build a thing the best place to start is the documentation https://www.terraform.io/docs. For an Azure SQL Db in the docs you will find a block of code that looks like this

resource "azurerm_resource_group" "test" {
  name     = "acceptanceTestResourceGroup1"
  location = "West US"
}

resource "azurerm_sql_server" "test" {
  name                         = "mysqlserver"
  resource_group_name          = "${azurerm_resource_group.test.name}"
  location                     = "West US"
  version                      = "12.0"
  administrator_login          = "4dm1n157r470r"
  administrator_login_password = "4-v3ry-53cr37-p455w0rd"
}

resource "azurerm_sql_database" "test" {
  name                = "mysqldatabase"
  resource_group_name = "${azurerm_resource_group.test.name}"
  location            = "West US"
  server_name         = "${azurerm_sql_server.test.name}"

  tags = {
    environment = "production"
  }
}

If you read the code, you can see that there are key-value pairs defining information about the resource that is being created. Anything inside a ${} is a dynamic reference. So

  resource_group_name          = "${azurerm_resource_group.test.name}"

refers to the name property in the azurerm_resource_group block called test (or the name of the resource group 🙂)

Infrastructure As Code

So I can put that code into a file (name it main.tf) and alter it with the values and "run Terraform" and what I want will be created. Let's take it a step further though because I want to be able to reuse this code. Instead of hard-coding all of the values I am going to use variables. I can do this by creating another file called variables.tf which looks like

variable "presentation" {
    description = "The name of the presentation - used for tagging Azure resources so I know what they belong to"
    default = "dataindevon"
}

variable "ResourceGroupName" {
  description = "The Resource Group Name"
  default     = "beardrules"
}

variable "location" {
  description = "The Azure Region in which the resources in this example should exist"
  default     = "uksouth"
}

variable "SqlServerName" {
  description = "The name of the Azure SQL Server to be created or to have the database on - needs to be unique, lowercase between 3 and 24 characters including the prefix"
  default     = "jeremy"
}
variable "SQLServerAdminUser" {
  description = "The name of the Azure SQL Server Admin user for the Azure SQL Database"
  default     = "Beard"
}
variable "SQLServerAdminPassword" {
  description = "The Azure SQL Database users password"
  default     = "JonathanlovesR3ge%"
}

variable "SqlDatabaseName" {
  description = "The name of the Azure SQL database on - needs to be unique, lowercase between 3 and 24 characters including the prefix"
  default     = "jsdb"
}

variable "Edition" {
  description = "The Edition of the Database - Basic, Standard, Premium, or DataWarehouse"
  default     = "Standard"
}

variable "ServiceObjective" {
  description = "The Service Tier S0, S1, S2, S3, P1, P2, P4, P6, P11 and ElasticPool"
  default     = "S0"
}

and my main.tf then looks like this.

provider "azurerm" {
    version = "=1.24.0"
}

resource "azurerm_resource_group" "presentation" {
  name     = "${var.ResourceGroupName}"
  location = "${var.location}"
    tags = {
    environment = "${var.presentation}"
  }
}

resource "azurerm_sql_server" "presentation" {
  name                         = "${var.SqlServerName}"
  resource_group_name          = "${azurerm_resource_group.presentation.name}"
  location                     = "${var.location}"
  version                      = "12.0"
  administrator_login          = "${var.SQLServerAdminUser}"
  administrator_login_password = "${var.SQLServerAdminPassword}"
    tags = {
    environment = "${var.presentation}"
  }
}

resource "azurerm_sql_database" "presentation" {
  name                = "${var.SqlDatabaseName}"
  resource_group_name = "${azurerm_sql_server.presentation.resource_group_name}"
  location            = "${var.location}"
  server_name         = "${azurerm_sql_server.presentation.name}"
  edition                          = "${var.Edition}"
  requested_service_objective_name = "${var.ServiceObjective}"

  tags = {
    environment = "${var.presentation}"
  }
}

You can find these files in my GitHub Repository here.

Alright - deploy something

To deploy the code that I have written I need to download Terraform from https://www.terraform.io/downloads.html and then extract the exe to a folder in my PATH (I chose C:\Windows). Then in Visual Studio Code I installed two extensions: the Terraform extension by Mikael Olenfalk, which enables syntax highlighting and auto-completion for the tf files, and the Azure Terraform extension. You will also need Node.js from here.

With those in place I navigated to the directory holding my files in Visual Studio Code and pressed F1 and started typing azure terraform and chose Azure Terraform Init

I was then prompted to use Cloud Shell and a browser opened to login. Once I had logged in I waited until I saw this

I press F1 again and this time choose Azure Terraform plan. This is going to show me what Terraform is going to do if it applies this configuration.

You can see what is going to be created. It is going to create 3 things.

Once you have checked that the plan is what you want, press F1 again and choose Azure Terraform Apply

You are then asked to confirm that this is what you want. Only "yes" will be accepted. Then you will see the infrastructure being created

and a minute later

and Jeremy exists in the beardrules resource group

Then once I have finished using the SQL instance, I can press F1 again and choose Azure Terraform Destroy. Again there is a confirmation required.

and you will see the progress for 46 seconds

and all of the resources have gone.
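Incidentally, if you have the terraform exe in your PATH and are already logged in with az login, the same workflow is available from any terminal rather than the F1 commands:

terraform init      # download the azurerm provider
terraform plan      # show what would be created
terraform apply     # build the resources - only "yes" will be accepted
terraform destroy   # tear everything down again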

That's a good start. This enables me to create resources quickly and easily and keep the configuration for them safely in source control and easy to use.

In my next post I will create an Azure DevOps pipeline to deploy an Azure SQL DB with Terraform.

The post after will show how to use Azure DevOps Task Groups to use the same build steps in multiple pipelines and build an Azure Linux SQL Server VM

The post after that will show how to use Azure DevOps templates to use the same build steps across many projects and build pipelines and will build a simple AKS cluster

Adding a Folder of Scripts to GitHub with Azure Data Studio

In my last post I showed how to add a folder of scripts to GitHub using Visual Studio Code.

You can do it with Azure Data Studio as well. It's exactly the same steps!

The blog post could end here but read on for some screen shots 😉

Follow the previous post for details of setting up a new GitHub account

Create a repository in GitHub


Open the folder in Azure Data Studio with CTRL K CTRL O (or File -> Open Folder)

Click on the Source Control icon or CTRL + SHIFT + G and then Initialize Repository

Choose the folder

Write a commit message

Say yes to the prompt. Press CTRL + ' to open the terminal

Navigate to the scripts folder. (I have a PSDrive set up to my Git folder)

Set-Location GIT:\ADS-Scripts\

and copy the code from the GitHub page after "…or push an existing repository from the command line"

and run it

and there are your scripts in GitHub

Make some changes to a script and it will go muddy brown

and then write a commit message. If you click on the file name in the source control tab then you can see the changes that have been made, that are not currently tracked

Commit the change with CTRL + ENTER and then click the roundy-roundy icon (seriously, anyone know its name?), click yes on the prompt and your changes are in GitHub as well 🙂

Realistically, you can use the previous post to do this with Azure Data Studio as it is built on top of Visual Studio Code but I thought it was worth showing the steps in Azure Data Studio.

Happy Source Controlling

Adding a Folder of Scripts to GitHub

Yesterday there was a tweet from Allen White.

Allen wanted to add his scripts folder to source control but didn't have a "how to do it" handy. So I thought I would write one. Hopefully this will enable someone new to GitHub and to source control to get a folder of scripts under source control.

GitHub account

If you do not have a GitHub account go to https://github.com and create a new account

There is a funky "are you a human" challenge

Then you can choose your subscription

Then answer some questions (note - you probably want to choose different answers to the "what are you interested in" question! I'd suggest something technical)

You need to do the email verification

Next is a very important step - please do not skip this. You should set up two-factor authentication. Yes, even if "it's just for me, there is nothing special here".

Click your user icon top right and then settings

Then click set up two factor authentication

and either set up with an app or via SMS (I suggest the app is better)

OK - now you have your GitHub account set up. It should have taken you less time than reading this far.

Add a Scripts Folder to GitHub

OK, now to add a folder of scripts to a repository. Here is my folder of scripts. They can be any type of files. I would recommend copying the folder to a specific Git folder.

Open VS Code - if you don't have VS Code, download it from https://code.visualstudio.com/. From the welcome window choose open folder


and open your scripts folder

In VS Code click the Source Control button


and up at the top you will see a little icon – initialise repository


Click that and choose your folder

Which will then show all of the changes to the repository (adding all the new files)

Now we need to add a commit message for our changes. I generally try to write commit messages that are the reason why the change has been made, as what has been changed is easy to see in VS Code (as well as other source control GUI tools).

Click the tick or press CTRL + ENTER and this box will pop up


I never click Always, I click yes, so that I can check if I am committing the correct files. Now we have created a local repository for our scripts folder. Our next step is to publish it to GitHub

Create a New Repository in GitHub

In GitHub we need to create a remote repository. Click on the New button. Give your repository a name and decide if you want it to be Public (available for anyone to search and find) or Private (only available to people you explicitly provide access to).

This will give you a page that looks like this

Copy the code after "…or push an existing repository from the command line"

# make sure prompt is at right place
Set-Location C:\Git\MyScriptsFolder
# Then paste the code
git remote add origin https://github.com/SQLDBAWithABeard-Test/TheBeardsFunkyScriptFolder.git
git push -u origin master

and paste it into PowerShell in VS Code. Make sure that your prompt is at the root of your scripts folder.

Fill in your username and password and your 2FA

Then you will see a page like this

and if you refresh your GitHub page you will see

Congratulations, your code is source controlled 🙂

Making Changes

Now you can make a change to a file

Commit your change

Hit the roundy-roundy icon (anyone know its proper name?)

Press OK and your commit will be pushed to GitHub 🙂

Yay – Source Control all the things

How to break a SQL 2019 container on my laptop

Just a very quick post today. At the weekend I blogged about creating SQL 2019 containers with named volumes enabling you to persist your data and yesterday about creating a random workload using PowerShell and a big T-SQL script.

The interesting thing about creating workload is that you can break things 🙂

When I created a SQL 2019 container with the data files mapped to a directory on my laptop's C drive with a docker-compose like this

version: '3.7'

services:
    2019-CTP23:
        image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu
        ports:  
          - "15591:1433"
          - "5022:5022"
        environment:
          SA_PASSWORD: "Password0!"
          ACCEPT_EULA: "Y"
        volumes: 
          - C:\MSSQL\BACKUP\KEEP:/var/opt/mssql/backups
          - C:\MSSQL\DockerFiles\datafiles:/var/opt/sqlserver
          - C:\MSSQL\DockerFiles\system:/var/opt/mssql

and then restored the AdventureWorks database to use the /var/opt/sqlserver directory and ran a workload, after a while the container stopped. When you examine the logs you find
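To examine the logs of a stopped container you can use the docker CLI (the container ID comes from the first command):

docker ps -a                # find the stopped container's ID
docker logs <container-id>  # read the SQL Server error log output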

I had a whole load of these errors

2019-04-02 20:48:24.73 spid58      Error: 17053, Severity: 16, State: 1.
2019-04-02 20:48:24.73 spid58      FCB::MakePreviousWritesDurable: Operating system error (null) encountered.
2019-04-02 20:48:24.74 spid58      Error: 9001, Severity: 21, State: 1.
2019-04-02 20:48:24.74 spid58      The log for database 'AdventureWorks2014' is not available. Check the operating system error log for related error messages. Resolve any errors and restart the database.
2019-04-02 20:48:25.05 spid58      Error: 9001, Severity: 21, State: 16.
2019-04-02 20:48:25.05 spid58      The log for database 'AdventureWorks2014' is not available. Check the operating system error log for related error messages. Resolve any errors and restart the database.
2019-04-02 20:48:25.06 spid52      Error: 9001, Severity: 21, State: 16.
2019-04-02 20:48:25.06 spid52      The log for database 'AdventureWorks2014' is not available. Check the operating system error log for related error messages. Resolve any errors and restart the database.

Then some of these

2019-04-02 20:55:16.26 spid53      Error: 17053, Severity: 16, State: 1.
2019-04-02 20:55:16.26 spid53      /var/opt/sqlserver/AdventureWorks2014_Data.mdf: Operating system error 31(A device attached to the system is not functioning.) encountered.

Then it went really bad

2019-04-02 20:55:16.35 spid53      Error: 3314, Severity: 21, State: 3.
2019-04-02 20:55:16.35 spid53      During undoing of a logged operation in database 'AdventureWorks2014' (page (0:0) if any), an error occurred at log record ID (65:6696:25). Typically, the specific failure is logged previously as an error in the operating system error log. Restore the database or file from
a backup, or repair the database.
2019-04-02 20:55:16.37 spid53      Database AdventureWorks2014 was shutdown due to error 3314 in routine 'XdesRMReadWrite::RollbackToLsn'. Restart for non-snapshot databases will be attempted after all connections to the database are aborted.
Restart packet created for dbid 5.
2019-04-02 20:55:16.41 spid53      Error during rollback. shutting down database (location: 1).
After that it tried to restart the database
2019-04-02 20:55:16.44 spid53      Error: 3314, Severity: 21, State: 3.
2019-04-02 20:55:16.44 spid53      During undoing of a logged operation in database 'AdventureWorks2014' (page (0:0) if any), an error occurred at log record ID (65:6696:25). Typically, the specific failure is logged previously as an error in the operating system error log. Restore the database or file from
a backup, or repair the database.
2019-04-02 20:55:16.49 spid53      Error: 3314, Severity: 21, State: 5.
2019-04-02 20:55:16.49 spid53      During undoing of a logged operation in database 'AdventureWorks2014' (page (0:0) if any), an error occurred at log record ID (65:6696:1). Typically, the specific failure
is logged previously as an error in the operating system error log. Restore the database or file from a backup, or repair the database.
Restart packet processing for dbid 5.
2019-04-02 20:55:17.04 spid52      [5]. Feature Status: PVS: 0. CTR: 0. ConcurrentPFSUpdate: 0.
2019-04-02 20:55:17.06 spid52      Starting up database 'AdventureWorks2014'.

But that caused

2019-04-02 20:55:17.90 spid76      Error: 9001, Severity: 21, State: 16.
2019-04-02 20:55:17.90 spid76      The log for database 'master' is not available. Check the operating
system error log for related error messages. Resolve any errors and restart the database.

Master eh? Now what will you do?

2019-04-02 20:55:25.55 spid52      29 transactions rolled forward in database 'AdventureWorks2014' (5:0). This is an informational message only. No user action is required.
2019-04-02 20:55:25.90 spid52      1 transactions rolled back in database 'AdventureWorks2014' (5:0). This is an informational message only. No user action is required.
2019-04-02 20:55:25.90 spid52      Recovery is writing a checkpoint in database 'AdventureWorks2014' (5). This is an informational message only. No user action is required.
2019-04-02 20:55:26.16 spid52      Recovery completed for database AdventureWorks2014 (database ID 5) in 7 second(s) (analysis 424 ms, redo 5305 ms, undo 284 ms.) This is an informational message only. No user action is required.
2019-04-02 20:55:26.21 spid52      Parallel redo is shutdown for database 'AdventureWorks2014' with worker pool size [1].
2019-04-02 20:55:26.27 spid52      CHECKDB for database 'AdventureWorks2014' finished without errors on 2018-03-24 00:38:39.313 (local time). This is an informational message only; no user action is required.

Interesting, then back to this.

2019-04-02 21:00:00.57 spid51      Error: 17053, Severity: 16, State: 1.
2019-04-02 21:00:00.57 spid51      FCB::MakePreviousWritesDurable: Operating system error (null) encountered.
2019-04-02 21:00:00.62 spid51      Error: 9001, Severity: 21, State: 1.
2019-04-02 21:00:00.62 spid51      The log for database 'AdventureWorks2014' is not available. Check the operating system error log for related error messages. Resolve any errors and restart the database.
2019-04-02 21:00:00.64 spid51      Error: 9001, Severity: 21, State: 16.

It did all that again before

This program has encountered a fatal error and cannot continue running at Tue Apr  2 21:04:08 2019
The following diagnostic information is available:

       Reason: 0x00000004
      Message: RETAIL ASSERT: Expression=(false) File=Thread.cpp Line=4643 Description=Timed out waiting for thread terminate/suspend/resume.
   Stacktrace: 000000006af30187 000000006af2836a 000000006ae4a4d1
               000000006ae48c55 000000006af6ab5e 000000006af6ac04
               00000002809528df
      Process: 7 - sqlservr
       Thread: 129 (application thread 0x1e8)
  Instance Id: 215cfcc9-8f69-4869-9a52-5aa44a415a83
     Crash Id: 53e98400-33f1-4786-98fd-484f0c8d9a7e
  Build stamp: 0e53295d0e1704ae5b221538dd6e2322cd46134e0cc32be49c887ca84cdb8c10
 Distribution: Ubuntu 16.04.6 LTS
   Processors: 2
 Total Memory: 4906205184 bytes
    Timestamp: Tue Apr  2 21:04:08 2019

Ubuntu 16.04.6 LTS
Capturing core dump and information to /var/opt/mssql/log...
/usr/bin/find: '/proc/7/task/516': No such file or directory
dmesg: read kernel buffer failed: Operation not permitted
No journal files were found.
No journal files were found.
Attempting to capture a dump with paldumper
WARNING: Capture attempt failure detected
Attempting to capture a filtered dump with paldumper
WARNING: Attempt to capture dump failed.  Reference /var/opt/mssql/log/core.sqlservr.7.temp/log/paldumper-debug.log for details
Attempting to capture a dump with gdb
WARNING: Unable to capture crash dump with GDB. You may need to
allow ptrace debugging, enable the CAP_SYS_PTRACE capability, or
run as root.

failing to capture its dump!! Oops 🙂

I had to recreate the containers without using the named volumes and then I could run my workload 🙂

Nothing particularly useful about this blog post other than an interesting look at the error log when things go wrong 🙂

Generating a Workload against AdventureWorks with PowerShell

For a later blog post I have been trying to generate some workload against an AdventureWorks database.

I found this excellent blog post by Pieter Vanhove https://blogs.technet.microsoft.com/msftpietervanhove/2016/01/08/generate-workload-on-your-azure-sql-database/ which references this 2011 post by Jonathan Kehayias
https://www.sqlskills.com/blogs/jonathan/the-adventureworks2008r2-books-online-random-workload-generator/

Both of these run a random query in a single thread so I thought I would use PoshRSJob by Boe Prox to run multiple queries at the same time 🙂

To install PoshRSJob, like with any PowerShell module, you run

Install-Module -Name PoshRSJob

I downloaded the AdventureWorksBOLWorkload zip from Pieter's blog post and extracted it to my C:\temp folder. I created an Invoke-RandomWorkload function which you can get from my functions repository on GitHub. The guts of the function are

    1.. $NumberOfJobs | Start-RSJob -Name "WorkLoad"  -Throttle $Throttle -ScriptBlock  {

        # Get the queries
        $Queries = Get-Content -Delimiter $Using:Delimiter -Path $Using:PathToScript 
        # Pick a Random Query from the input object 
        $Query = Get-Random -InputObject $Queries 
        # Run the Query
        Invoke-SqlCmd -ServerInstance  $Using:SqlInstance -Credential $Using:SqlCredential -Database $Using:Database -Query $Query 
        # Choose a random number of milliseconds to wait
        $a = Get-Random -Maximum 2000 -Minimum 100; 
        Start-Sleep -Milliseconds $a;     
    } 

which will create $NumberOfJobs jobs and then run $Throttle number of jobs in the background until they have all completed. Each job will run a random query from the query file using Invoke-SqlCmd. Why did I use Invoke-SqlCmd and not Invoke-DbaQuery from dbatools? dbatools creates runspaces in the background to help with logging, and creating runspaces inside background jobs causes errors.

Then I can run the function with

Invoke-RandomWorkload -SqlInstance $SQL2019CTP23 -SqlCredential $cred -Database AdventureWorks2014  -NumberOfJobs 1000 -Delay 10 -Throttle 10

and create a random workload. Creating lots of background jobs takes resources so when I wanted to run a longer workload I created a loop.

$x = 10
while($X -gt 0){
    Invoke-RandomWorkload -SqlInstance $SQL2019CTP23 -SqlCredential $cred -Database AdventureWorks2014  -NumberOfJobs 1000 -Delay 10 -Throttle 10
    $x--
}

You can get the function here. The full code is below

# With thanks to Jonathan Kehayias and Pieter Vanhove

<#
.SYNOPSIS
Runs a random workload against a database using a sql file

.DESCRIPTION
Runs a random workload against a database using PoshRSJob to create parallel jobs to run random
queries from a T-SQL file. By default it uses the AdventureWorksBOLWorkload.sql from Pieter Vanhove

.PARAMETER SqlInstance
The SQL instance to run the queries against

.PARAMETER SqlCredential
The SQL Credential for the Instance if required

.PARAMETER Database
The name of the database to run the queries against

.PARAMETER NumberOfJobs
The number of jobs to create - default 10

.PARAMETER Delay
The delay in seconds for the output for the running jobs - default 10

.PARAMETER Throttle
The number of parallel jobs to run at a time - default 5

.PARAMETER PathToScript
The path to the T-SQL script holding the queries - default 'C:\temp\AdventureWorksBOLWorkload\AdventureWorksBOLWorkload.sql'

.PARAMETER Delimiter
The delimiter in the T-SQL Script between the queries - default ------

.PARAMETER ShowOutput
Shows the output from the jobs

.EXAMPLE
Invoke-RandomWorkload -SqlInstance $SQL2019CTP23 -SqlCredential $cred -Database AdventureWorks2014 -NumberOfJobs 100 -Delay 10 -Throttle 10 

Runs 100 queries with a maximum of 10 at a time against the AdventureWorks2014 database on $SQL2019CTP23

.EXAMPLE
 $x = 10
 while($X -gt 0){
     Invoke-RandomWorkload -SqlInstance $SQL2019CTP23 -SqlCredential $cred -Database AdventureWorks2014  -NumberOfJobs 1000 -Delay 10 -Throttle 10
 $x--
 }

Runs 1000 queries with a maximum of 10 at a time against the AdventureWorks2014 database on $SQL2019CTP23 10 times in a loop


.NOTES
With thanks to Pieter Vanhove
https://blogs.technet.microsoft.com/msftpietervanhove/2016/01/08/generate-workload-on-your-azure-sql-database/
and
Jonathan Kehayias
The AdventureWorks2008R2 Books Online Random Workload Generator
#>
function Invoke-RandomWorkload {
    #Requires -Module PoshRSJob
    #Requires -Module SQLServer
    Param(
        [string]$SqlInstance,
        [pscredential]$SqlCredential,
        [string]$Database,
        [int]$NumberOfJobs = 10,
        [int]$Delay = 10,
        [int]$Throttle = 5,
        [string]$PathToScript = 'C:\temp\AdventureWorksBOLWorkload\AdventureWorksBOLWorkload.sql',
        [string]$Delimiter = "------",
        [switch]$ShowOutput
    )
    # Check if there are old Workload Jobs
    $WorkloadJobs = Get-RSJob -Name Workload
    if ($WorkloadJobs) {
        Write-Output "Removing Old WorkLoad Jobs"
        $WorkloadJobs | Stop-RSJob
        $WorkloadJobs | Remove-RSJob
    }
    Write-Output "Creating Background Jobs"
    1.. $NumberOfJobs | Start-RSJob -Name "WorkLoad" -Throttle $Throttle -ScriptBlock {
        # Get the queries
        $Queries = Get-Content -Delimiter $Using:Delimiter -Path $Using:PathToScript
        # Pick a Random Query from the input object
        $Query = Get-Random -InputObject $Queries
        # Run the Query
        Invoke-SqlCmd -ServerInstance $Using:SqlInstance -Credential $Using:SqlCredential -Database $Using:Database -Query $Query
        # Choose a random number of milliseconds to wait
        $a = Get-Random -Maximum 2000 -Minimum 100
        Start-Sleep -Milliseconds $a
    }
    $runningJobs = (Get-RSJob -Name WorkLoad -State Running).Count
    While ($runningJobs -ne 0) {
        $jobs = Get-RSJob -Name WorkLoad
        $runningJobs = $Jobs.Where{$PSItem.State -eq 'Running'}.Count
        $WaitingJobs = $Jobs.Where{$PSItem.State -eq 'NotStarted'}.Count
        $CompletedJobs = $Jobs.Where{$PSItem.State -eq 'Completed'}.Count
        Write-Output "$runningJobs jobs running - $WaitingJobs jobs waiting - $CompletedJobs jobs finished"
        Start-Sleep -Seconds $Delay
    }
    Write-Output "Jobs have finished"
    if ($ShowOutput) {
        Write-Output "WorkLoad Jobs Output below -"
        Get-RSJob -Name WorkLoad | Receive-RSJob
    }
    Write-Output "Removing Old WorkLoad Jobs"
    Get-RSJob -Name WorkLoad | Remove-RSJob
    Write-Output "Finished"
}