You can download the latest insiders edition from the link above; it can be installed alongside the stable release.
To access many of the available commands, use F1 to open the command palette (like many of my tips, this also works in Visual Studio Code). You can then start typing to filter to the command that you want.
With the command that you want highlighted, you can hit Enter, use the mouse, or use the shortcut displayed to the right.
In a new notebook, you can click the drop-down next to Kernel and you will see that PowerShell is now available.
When you choose the PowerShell kernel, you will get a prompt asking you to configure the Python installation.
If you already have Python installed, you can browse to the location where it is installed, or you can install Python. In the bottom pane you will be able to see the progress of the installation.
When it has completed, you will see confirmation that the installation has succeeded.
You may also get a prompt asking if you would like to upgrade some packages.
Again, this will be displayed in the tasks pane.
Adding PowerShell
To add PowerShell code to the notebook, click the Code button at the top of the file
or the one that appears when you hover above or below an existing block.
I did not have intellisense at first, but you can easily write your code in Azure Data Studio or Visual Studio Code and paste it into the block.
It turned out this was because I did not have the PowerShell extension installed (I know!!). If you find you don't have intellisense, install the PowerShell extension!
Clicking the play button (which is only visible when you hover the mouse over it) will run the code.
You can clear the results from every code block using the clear results button at the top.
Alternatively, you can save the results with the notebook by saving it. This is the part that was missing from running PowerShell in the Markdown blocks of a SQL Notebook, as I described here.
I am looking forward to seeing how this develops. You can find my sample PowerShell notebook (with the code results) here.
Most of my writing time at the moment is devoted to Learn dbatools in a Month of Lunches, which is now available, but here is a short post following a question someone asked me.
How can I get the installation date for SQL Server on my estate into a database with dbatools?
You can get the date that SQL Server was installed from the creation date of the NT AUTHORITY\SYSTEM login, using T-SQL:
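-- the NT AUTHORITY\SYSTEM login is created when the instance is installed, so its create_date gives the install date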
SELECT create_date
FROM sys.server_principals
WHERE sid = 0x010100000000000512000000
How you get the instances in your estate will differ per reader, but here is an example using Registered Servers from my local registered servers list; you can also use a Central Management Server.
Get-DbaRegisteredServer -Group local
So we can gather those instances into a variable and pass that to Get-DbaInstanceInstallDate:
$SqlInstances = Get-DbaRegisteredServer -Group local
Get-DbaInstanceInstallDate -SqlInstance $SqlInstances
Add to database
To add the results of any PowerShell command to a database, you can pipe the results to Write-DbaDbTableData.
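A minimal sketch, assuming an instance called sql01 that you can write to (the instance name is a placeholder):
$SqlInstances = Get-DbaRegisteredServer -Group local
Get-DbaInstanceInstallDate -SqlInstance $SqlInstances |
Write-DbaDbTableData -SqlInstance sql01 -Database tempdb -Table InstallDate -AutoCreateTable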
This will create a table called InstallDate and put the results of the Get-DbaInstanceInstallDate command into it. Note – if you want to try this code, I would advise using a different database than tempdb!!
It is important to note that the table created may not have the most optimal data types and that you may want to pre-create the table.
So there you go, all the installation dates for your estate in a database table. Hope that helps you Jonny.
As well as many other things, the fantastical BDFL of dbatools Chrissy Lemaire @cl and I have written enough of a chunk of Learn dbatools in a Month of Lunches that our publisher Manning Publications have agreed to release it as a MEAP. Not a textbook, this book is written in a fun conversational style and split up into chapters that you can read in a lunch-time.
It is impossible for me to hear MEAP and not think of this 🙂
What is MEAP? A book can take a year or more to write, so how do you learn that hot new technology today? The answer is MEAP, the Manning Early Access Program. In MEAP, you read a book chapter-by-chapter while it’s being written and get the final eBook as soon as it’s finished. If you pre-order the pBook, you’ll get it long before it’s available in stores.
Basically, to make it easy to get and for those that like to get in early, you can order the book and get the first 4 chapters (three in reality) RIGHT NOW!! (It also means that Chrissy and I have to write the rest of the book – dang, still going to be busy!)
Simply head over to https://beard.media/bookblog and use the code mlsewell and you can get access to the book too.
This will also give you access to the live book.
live book
The live book is fantastic, you can read the whole book from within your browser. See the three icons that appear to the right of the book?
3 little icons (no porridge)
The left-hand one enables you to bookmark an important part so that you can come back to it easily using the bookmarks link in the top right.
bookmarks
The middle icon enables you to write notes for yourself, maybe ways that you can use the information or maybe comments about an awesome Italian.
Shoes
The last one is the way that you can make comments and engage us, the authors, in conversation: ask questions, request clarification or wonder about Dutch data manglers.
I think it's down to PII
If you select a piece of text, another menu opens up
The first icon lets you highlight the text, to make it easier to find later
Hover over the highlight and you can choose different colours for different things.
or even create pretty pictures for Mathias
Mathias – Why isn’t he an MVP?
With the next icon you can choose to annotate, which is sort of like highlighting and writing a note combined.
When you want to share a link to a particular part of the book with someone else, you can highlight part of it and click the link icon
It’s easy to start PowerShell as another user as long as you remember when to press SHIFT
This will highlight the paragraph and open a dialogue at the bottom where you can create and copy the link.
By far the most important part for Chrissy and me is the last link. When you find something wrong you can mark it for our attention. Yes, even with Chrissy and I proof-reading each other's words, the fabulous proof reader Cláudio Silva (b | t) and awesome tech editor Mike Shepard (b | t), as well as many community reviewers, there are still, and will continue to be, issues. So when you find them, highlight them and click the right-most link.
with with more more than than one one
This will open up as shown so that you can fill in what was wrong. (Please don't report this error again – Shane (b | t) has beaten you to it!)
You will have noticed on social media and elsewhere that we have left some easter eggs in the book.
Whenever you find them or whenever you want to talk about the book on social media, please use the hashtag #dbatoolsMoL – you never know what goodies may end up in your inbox.
Oh, and if you have got this far and don't know what dbatools in a Month of Lunches is, listen to the hair and read more at https://dbatools.io/meap/
I had set the network security rules to accept connections only from my static IP using variables in the Build Pipeline. I use MobaXterm as my SSH client; it's a free download. I click on Sessions,
choose an SSH session and fill in the remote host address from the portal,
fill in the password, and connect.
Configuring SQL
The next task is to configure the SQL installation. Following the instructions on the Microsoft docs site I run
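the setup command (a sketch, assuming a default SQL Server on Linux installation; check the docs page for your version):
sudo /opt/mssql/bin/mssql-conf setup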
Luckily, you can 🙂 You can use Azure DevOps job templates to achieve this. There is a limitation at present: you can only use them for Build Pipelines and not Release Pipelines.
The aim of this little blog series was to have a single Build Pipeline stored as code which I can use to build any infrastructure that I want with Terraform in Azure, and to be able to use it anywhere.
Creating a Build Pipeline Template
I created a GitHub repository to hold my build templates. Feel free to use them as a base for your own, but please don't try to use the repo for your own builds.
There is a View YAML button. I can click this to view the YAML definition of the Build Pipeline.
I copy that and paste it into a new file in my BuildTemplates repository. (I have replaced my Azure subscription information in the public repository.)
jobs:
- job: Build
  pool:
    name: Hosted VS2017
    demands: azureps
  steps:
  - task: AzureCLI@1
    displayName: 'Azure CLI to deploy azure storage for backend'
    inputs:
      azureSubscription: 'PUTYOURAZURESUBNAMEHERE'
      scriptLocation: inlineScript
      inlineScript: |
        # the following script will create an Azure resource group, storage account and storage container which will be used to store the Terraform state
        call az group create --location $(location) --name $(TerraformStorageRG)
        call az storage account create --name $(TerraformStorageAccount) --resource-group $(TerraformStorageRG) --location $(location) --sku Standard_LRS
        call az storage container create --name terraform --account-name $(TerraformStorageAccount)
  - task: AzurePowerShell@3
    displayName: 'Azure PowerShell script to get the storage key'
    inputs:
      azureSubscription: 'PUTYOURAZURESUBNAMEHERE'
      ScriptType: InlineScript
      Inline: |
        # fetch the storage key, which is required in the Terraform file to authenticate to the backend storage account
        $key=(Get-AzureRmStorageAccountKey -ResourceGroupName $(TerraformStorageRG) -AccountName $(TerraformStorageAccount)).Value[0]
        Write-Host "##vso[task.setvariable variable=TerraformStorageKey]$key"
      azurePowerShellVersion: LatestVersion
  - task: qetza.replacetokens.replacetokens-task.replacetokens@3
    displayName: 'Replace tokens in terraform file'
    inputs:
      rootDirectory: Build
      targetFiles: |
        **/*.tf
        **/*.tfvars
      tokenPrefix: '__'
      tokenSuffix: '__'
  - powershell: |
      Get-ChildItem .\Build -Recurse
      Get-Content .\Build\*.tf
      Get-Content .\Build\*.tfvars
      Get-ChildItem Env: | Select-Object Name
    displayName: 'Check values in files'
    enabled: false
  - task: petergroenewegen.PeterGroenewegen-Xpirit-Vsts-Release-Terraform.Xpirit-Vsts-Release-Terraform.Terraform@2
    displayName: 'Initialise Terraform'
    inputs:
      TemplatePath: Build
      Arguments: 'init -backend-config="0-backend-config.tfvars"'
      InstallTerraform: true
      UseAzureSub: true
      ConnectedServiceNameARM: 'PUTYOURAZURESUBNAMEHERE'
  - task: petergroenewegen.PeterGroenewegen-Xpirit-Vsts-Release-Terraform.Xpirit-Vsts-Release-Terraform.Terraform@2
    displayName: 'Plan Terraform execution'
    inputs:
      TemplatePath: Build
      Arguments: plan
      InstallTerraform: true
      UseAzureSub: true
      ConnectedServiceNameARM: 'PUTYOURAZURESUBNAMEHERE'
  - task: petergroenewegen.PeterGroenewegen-Xpirit-Vsts-Release-Terraform.Xpirit-Vsts-Release-Terraform.Terraform@2
    displayName: 'Apply Terraform'
    inputs:
      TemplatePath: Build
      Arguments: 'apply -auto-approve'
      InstallTerraform: true
      UseAzureSub: true
      ConnectedServiceNameARM: 'PUTYOURAZURESUBNAMEHERE'
Now I can use this YAML as configuration as code for my Build Pipeline 🙂 It can be used from any Azure DevOps project. Once you start looking at the code and the documentation for the YAML schema you can begin to write your pipelines as YAML, but sometimes it is easier to just create a build pipeline, or even just a job step, in the browser and click the View YAML button!
Create an AKS Cluster with a SQL 2019 container using Terraform and Build templates
This time I am going to choose the Configuration as code template.
I am going to give it a name and it will show me that it needs the path to the yaml file containing the build definition in the current repository.
Clicking the 3 ellipses will pop up a file chooser, and I pick the build.yaml file.
The build.yaml file looks like this. The name is the USER/RepositoryName, and the endpoint is the name of the endpoint for the GitHub service connection in Azure DevOps. The template value is the name of the build yaml file @ the name given for the repository value.
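A sketch of what such a build.yaml can look like; the repository name, endpoint and template file name below are placeholders rather than my real values:
resources:
  repositories:
    - repository: templates
      type: github
      name: USER/BuildTemplates
      endpoint: GitHubServiceConnection
jobs:
- template: TerraformBuild.yaml@templates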
You can find (and change) your GitHub service connection name by clicking on the cog at the bottom left in Azure DevOps and clicking Service connections.
I still need to create my variables for my Terraform template (perhaps I can now just leave those in my code?). For the AKS cluster build right now I have to add presentation, location, ResourceGroupName, AgentPoolName, ServiceName, VMSize and agent_count.
Then I click Save and queue, and the job starts running.
If I want to edit the pipeline it looks a little different
The variables and triggers can be found under the 3 ellipses on the top right
It also defaults the trigger to automatic deployment.
It takes a bit longer to build
and when I get the Terraform code wrong and the build fails, I can just alter the code, commit it and push, and a new build will start, and Terraform will work out what is already built and what still needs to be built!
but eventually the job finishes successfully
and the resources are built
and in Visual Studio Code with the Kubernetes extension installed I can connect to the cluster by clicking the 3 ellipses and Add Existing Cluster
I choose Azure Kubernetes Services and click next
Choose my subscription and then add the cluster
and then I can explore my cluster
I can also see the dashboard by right clicking on the cluster name and Open Dashboard
Right clicking on the service name and choosing describe
shows the external IP address, which I can put into Azure Data Studio and connect to my container
So now I can source control my build job steps and hold them in a central repository. I can make use of them in any project. This gives me much more control and saves me from repeating myself repeating myself. The disadvantage is that there is no handy warning when I change the underlying build repository that I will be affecting other Build Pipelines, and there is no easy method to see which Build Pipelines are dependent on the build yaml file.
In my last post I showed how to build an Azure DevOps Pipeline for a Terraform build of an Azure SQLDB. This will take the terraform code and build the required infrastructure.
The plan all along has been to enable me to build different environments depending on the requirement. Obviously I can repeat the steps from the last post for a new repository containing Terraform code for a different environment, but
If you are going to do something more than once Automate It
who first said this? Anyone know?
The build steps for building the Terraform are the same each time (if I keep a standard folder and naming structure), so it would be much more beneficial if I could keep them in a single place, so that any alterations to the process only need to be made in the one place 🙂
A task group allows you to encapsulate a sequence of tasks, already defined in a build or a release pipeline, into a single reusable task that can be added to a build or release pipeline, just like any other task.
If you are doing this with a more complicated existing build pipeline it is important that you read the Before You Create A Task Group on the docs page. This will save you time when trying to understand why variables are not available (Another grey hair on my beard!)
Creating A Task Group
Here's the thing: creating a task group is so easy it should be the default way you create Azure DevOps Pipelines. Let me walk you through it.
I will use the Build Pipeline from the previous post. Click edit from the build page
Then CTRL and click to select all of the steps.
Right-click and there's a Create Task Group button to click!
You can see that it has helpfully added the values for the parameters it requires for the location, Storage Account and the Resource Group.
Remember the grey beard hair above? We need to change those values to use the variables that we will add to the Build Pipeline using
$(VariableName)
Once you have done that click Create
This will also alter the current Build Pipeline to use the Task Group. Now we have a Task Group that we can use in any build pipeline in this project.
Using the Task Group with a new Build Pipeline to build an Azure Linux SQL VM
Let's re-use the build steps to create an Azure SQL Linux VM. First I created a new GitHub repository for my Terraform code. Using the docs, I created the Terraform to create a resource group, a Linux SQL VM, a virtual network, a subnet, a NIC for the VM, a public IP for the VM, and a network security group with two rules, one for SQL and one for SSH. It will look like this
The next step is to choose the repository
again we are going to select Empty job (although the next post will be about the Configuration as code option 🙂)
As before we will name the Build Pipeline and the Agent Job Step and click the + to add a new task. This time we will search for the Task Group name that we created
I need to add in the variables from the variables.tf in the code and also for the Task Group
and when I click save and queue
It runs for less than 7 minutes
and when I look in the Azure portal
and I can connect in Azure Data Studio
Altering The Task Group
You can find the Task Groups under Pipelines in your Azure DevOps project
Click on the Task Group that you have created; you can then alter and edit it if required, and click Save.
This will warn you that any changes will affect all pipelines and task groups that are using this task group. To find out what will be affected, click on References,
which will show you everything that depends on it.
Now I can run the same build steps for any Build Pipeline and alter them all in a single place using Task Groups, simplifying the administration of the Build Pipelines.
In my last post I showed how to create a Resource Group and an Azure SQLDB with Terraform using Visual Studio Code to deploy.
Of course, I haven't stopped there; who wants to manually run code to create things? There was a lot of install this and set up that. I would rather give the code to a build system and get it to run. I can then even set it to automatically deploy new infrastructure when I commit some code to alter the configuration.
This scenario though is to build environments for presentations. Last time I created an Azure SQL DB and tagged it with DataInDevon (By the way you can get tickets for Data In Devon here – It is in Exeter on April 26th and 27th)
If I want to create the same environment but give it tags for a different event (this way I know when I can delete resources in Azure!) or name it differently, I can use Azure DevOps and alter the variables. I could just alter the code, commit the change and trigger a build, or I could create variables and enable them to be set at the time the job is run. I use the former in "work" situations and the latter for my presentations environment.
I have created a project in Azure DevOps for my Presentation Builds. I will be using GitHub to share the code that I have used. Once I clicked on pipelines, this is the page I saw
Clicking new pipeline, Azure DevOps asked me where my code was
I chose GitHub, authorised and chose the repository.
I then chose Empty job on the next page. See the Configuration as code choice? We will come back to that later, and our infrastructure as code will be deployed with a configuration as code 🙂
The next page allows us to give the build a good name and choose the Agent Pool that we want to use. Azure DevOps gives 7 different hosted agents running Linux, Mac or Windows, or you can download an agent and run it on your own CPUs. We will use the default agent for this process.
Clicking on Agent Job 1 enables me to change the name of the agent job. I could also choose a different type of agent for different jobs within the same pipeline. This would be useful for testing different OSs, for example, but for right now I shall just name it properly.
State
First we need somewhere to store the state of our build so that if we re-run it the Terraform plan step will be able to work out what it needs to do. (This is not absolutely required just for building my presentation environments, and this might not be the best way to achieve it, but for right now this is what I do, and it works.)
I click on the + and search for Azure CLI.
and click on the Add button which gives me some boxes to fill in.
I choose my Azure subscription from the first drop down and choose Inline Script from the second
Inside the script block I put the following code
# the following script will create an Azure resource group, storage account and storage container which will be used to store the Terraform state
call az group create --location $(location) --name $(TerraformStorageRG)
call az storage account create --name $(TerraformStorageAccount) --resource-group $(TerraformStorageRG) --location $(location) --sku Standard_LRS
call az storage container create --name terraform --account-name $(TerraformStorageAccount)
This will create a resource group, a storage account and a container, and use some variables to provide the values; we will come back to the variables later.
Access Key
The next thing that we need to do is to enable the job to access the storage account. We don't want to store that key anywhere, but we can use our Azure DevOps variables and some PowerShell to gather the access key and write it to the variable while the job is running. To create the variables I clicked on the Variables tab
and then added the variables with the following names: TerraformStorageRG, TerraformStorageAccount and location from the previous task, and TerraformStorageKey for the next task.
With those created, I go back to Tasks and add an Azure PowerShell task
I then add this code to get the access key and overwrite the variable.
# Using this script we will fetch the storage key, which is required in the Terraform file to authenticate to the backend storage account
$key=(Get-AzureRmStorageAccountKey -ResourceGroupName $(TerraformStorageRG) -AccountName $(TerraformStorageAccount)).Value[0]
Write-Host "##vso[task.setvariable variable=TerraformStorageKey]$key"
variable "presentation" {
  description = "The name of the presentation - used for tagging Azure resources so I know what they belong to"
  default     = "__Presentation__"
}
variable "ResourceGroupName" {
  description = "The prefix used for all resources in this example"
  default     = "__ResourceGroupName__"
}
variable "location" {
  description = "The Azure region in which the resources in this example should exist"
  default     = "__location__"
}
variable "SqlServerName" {
  description = "The name of the Azure SQL Server to be created or to have the database on - needs to be unique, lowercase, between 3 and 24 characters including the prefix"
  default     = "__SqlServerName__"
}
variable "SQLServerAdminUser" {
  description = "The name of the Azure SQL Server admin user for the Azure SQL Database"
  default     = "__SQLServerAdminUser__"
}
variable "SQLServerAdminPassword" {
  description = "The Azure SQL Database user's password"
  default     = "__SQLServerAdminPassword__"
}
variable "SqlDatabaseName" {
  description = "The name of the Azure SQL Database - needs to be unique, lowercase, between 3 and 24 characters including the prefix"
  default     = "__SqlDatabaseName__"
}
variable "Edition" {
  description = "The edition of the database - Basic, Standard, Premium, or DataWarehouse"
  default     = "__Edition__"
}
variable "ServiceObjective" {
  description = "The service tier - S0, S1, S2, S3, P1, P2, P4, P6, P11 or ElasticPool"
  default     = "__ServiceObjective__"
}
It is exactly the same except that the values have been replaced by the value name prefixed and suffixed with __. This will enable me to replace the values with the variables in my Azure DevOps build job.
The 0-backend-config.tfvars file will store the details of the state storage that is created by the first step and use the access key that has been retrieved in the second step.
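A sketch of its contents, assuming the standard azurerm backend settings and the same __token__ style as above:
resource_group_name  = "__TerraformStorageRG__"
storage_account_name = "__TerraformStorageAccount__"
container_name       = "terraform"
key                  = "terraform.tfstate"
access_key           = "__TerraformStorageKey__"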
I need to add the following variables to my Azure DevOps build: Presentation, ResourceGroupName, SqlServerName, SQLServerAdminUser, SQLServerAdminPassword, SqlDatabaseName, Edition and ServiceObjective. Personally, I would advise setting the password and any other sensitive values to secret by clicking the padlock for that variable. This will stop the value being written to the log as well as hiding it behind asterisks.
Because I have tagged the variables with Settable at queue time, I can set the values whenever I run a build, so if I am at a different event I can change the name.
But the build job hasn’t been set up yet. First we need to replace the values in the variables file.
Replace the Tokens
I installed the Replace Tokens Task from the marketplace and added that to the build.
I am going to use a standard naming convention for my infrastructure code files, so I add Build to the Root directory. You can also click the ellipses and navigate to a folder in your repo. In the Target files I add **/*.tf and **/*.tfvars, which will search all of the folders (**) and only work on files with a .tf or .tfvars extension. The next step is to make sure that the replacement prefix and suffix are correct; this setting is hidden under Advanced.
Because I often forget this step, and to aid in troubleshooting, I add another step to read the contents of the files and place them in the logs. I do this by adding a PowerShell step which uses
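the following, the same snippet as the disabled Check values in files step in the YAML template earlier:
Get-ChildItem .\Build -Recurse
Get-Content .\Build\*.tf
Get-Content .\Build\*.tfvars
Get-ChildItem Env: | Select-Object Name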
Under Control Options there is a check box to enable or disable the step, so once I know that everything is OK with the build I will disable it. The output in the log of a build will look like this, showing the actual values in the files. This is really useful for finding spaces :-).
Running the Terraform in Azure DevOps
With everything set up we can now run the Terraform. I installed the Terraform task from the marketplace and added a task. We are going to follow the same process as the last blog post (init, plan, apply), but this time we are going to automate it 🙂
First we will initialise
I put Build in the Terraform Template path. The Terraform arguments are
init -backend-config="0-backend-config.tfvars"
which will tell Terraform to use the 0-backend-config.tfvars file for the state. It is important to tick the Install terraform checkbox to ensure that Terraform is available on the agent, and to add the Azure subscription (or service endpoint in a corporate environment).
After the initialise, I add the Terraform task again, add Build to the template path, and this time the argument is plan
Again, tick the install terraform checkbox and also the Use Azure Service Endpoint and choose the Azure Subscription.
We also need to tell Terraform where to find the tfstate file by specifying the variables for the resource group, the storage account and the container.
Finally, add another Terraform task for the apply, remembering to tick the Install Terraform and Use Azure checkboxes.
The arguments are
apply -auto-approve
This will negate the requirement for the “Only “yes” will be accepted to approve” from the manual steps post!
Build a Thing
Now we can build the environment. Clicking Save and Queue
opens this dialogue
where the variables can be filled in.
The build will be queued and clicking on the build number will open the logs
6 minutes later the job has finished
and the resources have been created.
If I want to look in the logs of the job I can click on one of the steps and take a look. This is the apply step
Do it Again For Another Presentation
So that is good; I can create my environment as I want it. Once my presentation has finished I can delete the resource groups. When I need to do the presentation again, I can queue another build and change the variables.
The job will run
and the new resource group will be created
all ready for my next presentation 🙂
This is brilliant; I can set up the same solution for different repositories for different presentations (infrastructure) and recreate the above steps.
I have been using Terraform for the last week or so to create some infrastructure, and decided to bring that knowledge back to a problem that I and others suffer from: building environments for presentations, all for the sake of doing some learning.
What is Terraform?
According to the website
HashiCorp Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned
This means that I can define my infrastructure as code. If I can do that then I can reliably do the same thing again and again, at work to create environments that have the same configuration or outside of work to repeatedly build the environment I need.
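For example, a resource group block looks like this (a minimal sketch, using the names you will see below):
resource "azurerm_resource_group" "test" {
  name     = "beardrules"
  location = "uksouth"
}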
If you read the code, you can see that there are key value pairs defining information about the resource that is being created. Anything inside a ${} is a dynamic reference, so ${azurerm_resource_group.test.name}
refers to the name property in the azurerm_resource_group block called test (or the name of the resource group 🙂).
Infrastructure As Code
So I can put that code into a file (name it main.tf), alter it with the values, run Terraform, and what I want will be created. Let's take it a step further though, because I want to be able to reuse this code. Instead of hard-coding all of the values I am going to use variables. I can do this by creating another file called variables.tf which looks like this:
variable "presentation" {
  description = "The name of the presentation - used for tagging Azure resources so I know what they belong to"
  default     = "dataindevon"
}
variable "ResourceGroupName" {
  description = "The Resource Group name"
  default     = "beardrules"
}
variable "location" {
  description = "The Azure region in which the resources in this example should exist"
  default     = "uksouth"
}
variable "SqlServerName" {
  description = "The name of the Azure SQL Server to be created or to have the database on - needs to be unique, lowercase, between 3 and 24 characters including the prefix"
  default     = "jeremy"
}
variable "SQLServerAdminUser" {
  description = "The name of the Azure SQL Server admin user for the Azure SQL Database"
  default     = "Beard"
}
variable "SQLServerAdminPassword" {
  description = "The Azure SQL Database user's password"
  default     = "JonathanlovesR3ge%"
}
variable "SqlDatabaseName" {
  description = "The name of the Azure SQL Database - needs to be unique, lowercase, between 3 and 24 characters including the prefix"
  default     = "jsdb"
}
variable "Edition" {
  description = "The edition of the database - Basic, Standard, Premium, or DataWarehouse"
  default     = "Standard"
}
variable "ServiceObjective" {
  description = "The service tier - S0, S1, S2, S3, P1, P2, P4, P6, P11 or ElasticPool"
  default     = "S0"
}
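In main.tf, the hard-coded values are then replaced with var references; a sketch for the resource group (the other resources follow the same pattern):
resource "azurerm_resource_group" "test" {
  name     = "${var.ResourceGroupName}"
  location = "${var.location}"
}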
With those in place, I navigated to the directory holding my files in Visual Studio Code, pressed F1, started typing azure terraform and chose Azure Terraform Init.
I was then prompted to use Cloud Shell and a browser opened to login. Once I had logged in I waited until I saw this
I press F1 again and this time choose Azure Terraform plan. This is going to show me what Terraform is going to do if it applies this configuration.
You can see what is going to be created. It is going to create 3 things.
Once you have checked that the plan is what you want, press F1 again and choose Azure Terraform Apply
You are then asked to confirm that this is what you want. Only “yes” will be accepted. Then you will see the infrastructure being created
and a minute later
and Jeremy exists in the beardrules resource group
Then, once I have finished using the SQL instance, I can press F1 again and choose Azure Terraform Destroy. Again, there is a confirmation required.
and you will see the progress for 46 seconds
and all of the resources have gone.
That's a good start. This enables me to create resources quickly and easily and to keep the configuration for them safely in source control, easy to use.
You can do it with Azure Data Studio as well. It’s exactly the same steps!
The blog post could end here, but read on for some screenshots 🙂
Follow the previous post for details of setting up a new GitHub account
Create a repository in GitHub
Open the folder in Azure Data Studio with CTRL K CTRL O (Or File –> Open Folder)
Click on the Source Control icon or CTRL + SHIFT + G and then Initialize Repository
Choose the folder
Write a commit message
Say yes to the prompt. Press CTRL + ` to open the terminal.
Navigate to the scripts folder. (I have a PSDrive set up to my Git folder)
Set-Location GIT:\ADS-Scripts\
and copy the code from the GitHub page after "…or push an existing repository from the command line"
and run it
and there are your scripts in GitHub
Make some changes to a script and it will go muddy brown
and then write a commit message. If you click on the file name in the source control tab you can see the changes that have been made that are not yet committed.
Commit the change with CTRL + ENTER and then click the roundy-roundy icon (seriously, anyone know its name?). Click yes on the prompt, and your changes are in GitHub as well 🙂
Realistically, you can use the previous post to do this with Azure Data Studio as it is built on top of Visual Studio Code but I thought it was worth showing the steps in Azure Data Studio.
Allen wanted to add his scripts folder to source control but didn't have a how-to handy, so I thought I would write one. Hopefully this will enable someone new to GitHub and to source control to get a folder of scripts under source control.
GitHub account
If you do not have a GitHub account go to https://github.com and create a new account
There is a funky are you a human challenge
Then you can choose your subscription
Then answer some questions. (Note – you probably want to choose different answers to the "what are you interested in" question! I'd suggest something technical.)
You need to do the email verification
Next is a very important step – please do not skip this. You should set up two-factor authentication. Yes, even if "it's just for me, there is nothing special here".
Click your user icon top right and then settings
Then click set up two factor authentication
and either set up with an app or via SMS (I suggest the app is better)
OK – Now you have your GitHub account set up. It should have taken you less time than reading this far.
Add a Scripts Folder to GitHub
OK, now to add a folder of scripts to a repository. Here is my folder of scripts. They can be any type of files. I would recommend copying the folder to a specific Git folder.
Open VS Code. If you don't have VS Code, download it from https://code.visualstudio.com/. From the welcome window choose Open folder
and open your scripts folder
In VS Code click the Source Control button
and up at the top you will see a little icon – Initialize Repository.
Click that and choose your folder
Which will then show all of the changes to the repository (adding all the new files)
Now we need to add a commit message for our changes. I generally try to write commit messages that are the reason why the change has been made as the what has been changed is made easy to see in VS Code (as well as other source control GUI tools)
Click the tick or press CTRL + ENTER and this box will pop up
I never click Always; I click Yes, so that I can check that I am committing the correct files. Now we have created a local repository for our scripts folder. Our next step is to publish it to GitHub.
Create a New Repository in GitHub
In Github we need to create a remote repository. Click on the New Button. Give your repository a name and decide if you want it to be Public (available for anyone to search and find) or Private (only available to people you explicitly provide access to).
This will give you a page that looks like this
Copy the code after "…or push an existing repository from the command line"
# make sure prompt is at right place
Set-Location C:\Git\MyScriptsFolder
# Then paste the code
git remote add origin https://github.com/SQLDBAWithABeard-Test/TheBeardsFunkyScriptFolder.git
git push -u origin master
and paste it into PowerShell in VS Code. Make sure that your prompt is at the root of your scripts folder.
Fill in your username and password and your 2FA
Then you will see a page like this
and if you refresh your GitHub page you will see
Congratulations, your code is source controlled π
Making Changes
Now you can make a change to a file
Commit your change
Hit the roundy-roundy icon (anyone know its proper name?)
Press OK and your commit will be pushed to GitHub 🙂