Azure SQL Linux VM – configuring SQL, installing pwsh and connecting and interacting with dbatools

In my posts about using Azure DevOps to build Azure resources with Terraform, I built a Linux SQL VM. I used the Terraform code in this GitHub repository and created this VM.

Connecting with MobaXterm

I had set the Network security rules to accept connections only from my static IP using variables in the Build Pipeline. I use MobaXterm as my SSH client. It's a free download. I click on Sessions

Choose an SSH session and fill in the remote host address from the portal

fill in the password and connect.

Configuring SQL

The next task is to configure the SQL installation. Following the instructions on the Microsoft docs site I run

sudo systemctl stop mssql-server
sudo /opt/mssql/bin/mssql-conf set-sa-password

enter the sa password and confirm it.

Now to start SQL

sudo systemctl start mssql-server

Installing pwsh

Installing PowerShell Core (pwsh) is easy with snap

sudo snap install powershell --classic

A couple of minutes of downloading and installing

and pwsh is ready for use

Installing dbatools

To install dbatools from the PowerShell Gallery, simply run

Install-Module dbatools -Scope CurrentUser

This will prompt you to allow installing from an untrusted repository

and dbatools is ready to go

#Set a credential
$cred = Get-Credential
# Show the databases on the local instance
Get-DbaDatabase -SqlInstance localhost -SqlCredential $cred
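As an aside, if you would rather not be prompted about the untrusted repository each time, you can mark the PowerShell Gallery as trusted first. A minimal optional sketch:

# Optional: trust the PowerShell Gallery so that Install-Module no longer prompts
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted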

Connecting with Azure Data Studio

I can also connect with Azure Data Studio

fill in the connection details and connect.

Just a quick little post explaining what I did 🙂

Happy Linuxing!

Using Azure DevOps Build Pipeline Templates with Terraform to build an AKS cluster

In the last few posts I have moved from building an Azure SQL DB with Terraform using VS Code, to automating the build process for the Azure SQL DB using Azure DevOps Build Pipelines, to using Task Groups in Azure DevOps to reuse the same Build Process and build an Azure Linux SQL VM and Network Security Group. This evolution is fantastic, but Task Groups can only be used within the same Azure DevOps project. It would be brilliant if I could use Configuration as Code for the Azure Build Pipeline and store that in a separate source control repository which can be used from any Azure DevOps Project.

Luckily, you can 😉 You can use Azure DevOps Job Templates to achieve this. There is a limitation at present: you can only use them for Build Pipelines and not Release Pipelines.

The aim of this little blog series was to have a single Build Pipeline stored as code which I can use to build any infrastructure that I want with Terraform in Azure, and to be able to use it anywhere.

Creating a Build Pipeline Template

I created a GitHub repository to hold my Build Templates, feel free to use them as a base for your own but please don’t try and use the repo for your own builds.

The easiest way to create a Build Template is to already have a Build Pipeline. This cannot be done from a Task Group but I still have the Build Pipeline from my automating the build process for the Azure SQL DB using Azure DevOps Build Pipelines blog post.

There is a View YAML button. I can click this to view the YAML definition of the Build Pipeline

I copy that and paste it into a new file in my BuildTemplates repository. (I have replaced my Azure Subscription information in the public repository)

jobs:
- job: Build
  pool:
    name: Hosted VS2017
    demands: azureps
  steps:
  - task: AzureCLI@1
    displayName: 'Azure CLI to deploy azure storage for backend'
    inputs:
      azureSubscription: 'PUTYOURAZURESUBNAMEHERE'
      scriptLocation: inlineScript
      inlineScript: |
        # the following script will create Azure resource group, Storage account and a Storage container which will be used to store terraform state
        call az group create --location $(location) --name $(TerraformStorageRG)
        
        call az storage account create --name $(TerraformStorageAccount) --resource-group $(TerraformStorageRG) --location $(location) --sku Standard_LRS
        
        call az storage container create --name terraform --account-name $(TerraformStorageAccount)

  - task: AzurePowerShell@3
    displayName: 'Azure PowerShell script to get the storage key'
    inputs:
      azureSubscription: 'PUTYOURAZURESUBNAMEHERE'
      ScriptType: InlineScript
      Inline: |
        # Using this script we will fetch the storage key, which is required in the terraform file to authenticate to the backend storage account
      
        $key=(Get-AzureRmStorageAccountKey -ResourceGroupName $(TerraformStorageRG) -AccountName $(TerraformStorageAccount)).Value[0]
      
        Write-Host "##vso[task.setvariable variable=TerraformStorageKey]$key"
      azurePowerShellVersion: LatestVersion

  - task: qetza.replacetokens.replacetokens-task.replacetokens@3
    displayName: 'Replace tokens in terraform file'
    inputs:
      rootDirectory: Build
      targetFiles: |
        **/*.tf
        **/*.tfvars
      tokenPrefix: '__'
      tokenSuffix: '__'

  - powershell: |
      Get-ChildItem .\Build -Recurse
    
      Get-Content .\Build\*.tf 
      Get-Content .\Build\*.tfvars 
    
      Get-ChildItem Env: | select Name
    displayName: 'Check values in files'
    enabled: false

  - task: petergroenewegen.PeterGroenewegen-Xpirit-Vsts-Release-Terraform.Xpirit-Vsts-Release-Terraform.Terraform@2
    displayName: 'Initialise Terraform'
    inputs:
      TemplatePath: Build
      Arguments: 'init -backend-config="0-backend-config.tfvars"'
      InstallTerraform: true
      UseAzureSub: true
      ConnectedServiceNameARM: 'PUTYOURAZURESUBNAMEHERE'

  - task: petergroenewegen.PeterGroenewegen-Xpirit-Vsts-Release-Terraform.Xpirit-Vsts-Release-Terraform.Terraform@2
    displayName: 'Plan Terraform execution'
    inputs:
      TemplatePath: Build
      Arguments: plan
      InstallTerraform: true
      UseAzureSub: true
      ConnectedServiceNameARM: 'PUTYOURAZURESUBNAMEHERE'

  - task: petergroenewegen.PeterGroenewegen-Xpirit-Vsts-Release-Terraform.Xpirit-Vsts-Release-Terraform.Terraform@2
    displayName: 'Apply Terraform'
    inputs:
      TemplatePath: Build
      Arguments: 'apply -auto-approve'
      InstallTerraform: true
      UseAzureSub: true
      ConnectedServiceNameARM: 'PUTYOURAZURESUBNAMEHERE'

Now I can use this YAML as configuration as code for my Build Pipeline 🙂 It can be used from any Azure DevOps project. Once you start looking at the code and the documentation for the YAML schema you can begin to write your pipelines as YAML, but sometimes it is easier to just create a build pipeline, or even just a job step, in the browser and click the View YAML button!

Create an AKS Cluster with a SQL 2019 container using Terraform and Build templates

I have a GitHub Repository with the Terraform code to build a simple AKS cluster. This could not have been achieved without Richard Cheney's article. I am not going to explain how it all works in this blog post, or some of the negatives of doing it this way. Instead, let's build an Azure DevOps Build Pipeline to build it with Terraform using Configuration as Code (the YAML file).

I am going to create a new Azure DevOps Build Pipeline and as in the previous posts connect it to the GitHub Repository holding the Terraform code.

This time I am going to choose the Configuration as code template

I am going to give it a name and it will show me that it needs the path to the yaml file containing the build definition in the current repository.

Clicking the ellipsis (…) button pops up a file chooser and I pick the build.yaml file

The build.yaml file looks like this. The repository name is in USER/RepositoryName format, the endpoint is the name of the GitHub service connection in Azure DevOps, and the template value is the name of the build YAML file followed by @ and the name given in the repository value.

resources:
  repositories:
    - repository: templates
      type: github
      name: SQLDBAWithABeard/Presentations-BuildTemplates-Private
      endpoint: SQLDBAWithABeardGitHub

jobs:
- template: AzureTerraform.yaml@templates  # Template reference

You can find (and change) your GitHub service connection name by clicking on the cog bottom left in Azure DevOps and clicking service connections

I still need to create my variables for my Terraform template (perhaps I can now just leave those in my code?) For the AKS Cluster build right now I have to add presentation, location, ResourceGroupName, AgentPoolName, ServiceName, VMSize, agent_count

Then I click save and queue and the job starts running

If I want to edit the pipeline it looks a little different

The variables and triggers can be found under the ellipsis (…) menu at the top right

It also defaults the trigger to automatic deployment.

It takes a bit longer to build

and when I get the Terraform code wrong and the build fails, I can just alter the code, commit it and push, and a new build will start; Terraform will work out what is already built and what still needs to be built!

but eventually the job finishes successfully

and the resources are built

and in Visual Studio Code with the Kubernetes extension installed I can connect to the cluster by clicking the ellipsis (…) button and Add Existing Cluster

I choose Azure Kubernetes Services and click next

Choose my subscription and then add the cluster

and then I can explore my cluster

I can also see the dashboard by right clicking on the cluster name and Open Dashboard

Right clicking on the service name and choosing describe

shows the external IP address, which I can put into Azure Data Studio and connect to my container
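If you prefer the command line to the extension, kubectl shows the same information. A minimal sketch, assuming a hypothetical service name of mssql-service (use whatever your deployment calls it):

# List the services; the EXTERNAL-IP column shows the public IP of the LoadBalancer service
kubectl get service
# Or describe a single service in full (mssql-service is an assumed name - use yours)
kubectl describe service mssql-service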

So now I can source control my Build Job Steps and hold them in a central repository. I can make use of them in any project. This gives me much more control and saves me from repeating myself repeating myself. The disadvantage is that there is no handy warning when I change the underlying Build Repository that I will be affecting other Build Pipelines, and there is no easy method to see which Build Pipelines are dependent on the build YAML file.
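If you want a rough starting point for tracking that dependency down, the Azure DevOps REST API can at least list the build definitions in a project and the YAML file each one uses; you would still have to inspect each file for the template reference. A hedged sketch only (the organisation, project and PAT values are assumptions):

# Hedged sketch - list YAML-based build definitions in a project and the YAML file each one uses
$organisation = 'MyOrg'         # assumption - your Azure DevOps organisation
$project      = 'MyProject'     # assumption - your Azure DevOps project
$pat          = 'MyPATToken'    # assumption - a Personal Access Token with Build (Read) scope

$headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
$uri = "https://dev.azure.com/$organisation/$project/_apis/build/definitions?includeAllProperties=true&api-version=5.1"

(Invoke-RestMethod -Uri $uri -Headers $headers).value |
    Where-Object { $_.process.yamlFilename } |
    Select-Object name, @{ Name = 'YamlFile'; Expression = { $_.process.yamlFilename } }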

Happy Automating

Deploying a Windows Data Science Virtual Machine to Azure with PowerShell easily

This weekend (10 December 2016), I went to Slovenia for a SQL Saturday. As always, it was an amazing event well organised by Mladen Prajdic, Dejan Sarka, and Matija Lah in a fabulous setting amongst fabulous scenery. I highly recommend it and, also, Ljubljana  is a wonderful place to be in December with all the lights and markets.

WP_20161209_19_21_06_Pro.jpg

Whilst I was there I was asked by someone if you could deploy data science virtual machines in Azure with PowerShell. I said I was sure that it could be done and agreed I would write a blog post, so here it is.

According to the Azure documentation

The Data Science Virtual Machine running on a Windows Server 2012 contains popular tools for data exploration, modeling and development activities. The main tools include Microsoft R Server Developer Edition (An enterprise ready scalable R framework) , Anaconda Python distribution, Jupyter notebooks for Python and R, Visual Studio Community Edition with Python, R and node.js tools, Power BI desktop, SQL Server 2016 Developer edition including support In-Database analytics using Microsoft R Server. It also includes open source deep learning tools like Microsoft Cognitive Toolkit (CNTK 2.0) and mxnet; ML algorithms like xgboost, Vowpal Wabbit. The Azure SDK and libraries on the VM allows you to build your applications using various services in the cloud that are part of the Cortana Analytics Suite which includes Azure Machine Learning, Azure data factory, Stream Analytics and SQL Datawarehouse, Hadoop, Data Lake, Spark and more.

I have created a function to wrap around the process to make it easier for non-PowerShell people to do this. There are a series of steps to follow below and you should be able to create a machine in about 10 minutes once you have completed the pre-requisites.

Enable Programmatically Deployment

First, an annoyance. To be able to deploy Data Science virtual machines in Azure programmatically, you first have to log in to the portal and click some buttons.

In the Portal click New, then Marketplace, and then search for data science. Choose the Windows Data Science Machine and under the blue Create button you will see a link which says “Want to deploy programmatically? Get started”. Clicking this will lead to the following blade.

set-up-programmatically

Click Enable and then Save, and you can then move to PowerShell 🙂

Azure PowerShell Cmdlets

Follow the instructions here to install the Azure PowerShell modules. In the examples you see here I am using Windows 10 and PowerShell version 5.1.14393.479 and I installed the Azure modules using the Install-Module method

Get the script

To install a data science VM, we’ll use the New-WindowsDataScienceVM.ps1 script. In this script, I’m using version 1.2, but any version of this script published in PowerShell Gallery is fine.

To install the New-WindowsDataScienceVM script from the PowerShell gallery, type:

Install-Script New-WindowsDataScienceVM

For more information about using the PowerShellGet cmdlets to install scripts and modules from the PowerShell Gallery, read this page. The PowerShellGet module is included in PowerShell 5.0 and later on Windows 10, but you can install PowerShellGet for PowerShell 3.0 and 4.0. If you cannot connect to the gallery or prefer not to install the module, you can also find the script on GitHub.

Login to Azure

You can login to Azure using the command

Login-AzureRMAccount

which will pop-up a prompt for you to log into Azure

login

Enable Simple Mode

The New-WindowsDataScienceVM function comes with a Simple switch parameter.

If you use -Simple, the function prompts you only for the admin username and password for the virtual machine. It creates a randomly-named, standard_DS1_v2-sized machine in the ukwest data centre with standard, locally redundant storage in a randomly named Resource Group. All of the required objects have random names, too. If that is not what you want, there is more information at the end of this post. I am considering offering a pop-up to choose location in Simple Mode. Let me know here if that would be something you would like

To create a simple data science VM, run:

New-WindowsDataScienceVM -Simple

Enter Local Admin Password

When you run the function, it prompts for a local admin username and password to log into the virtual machine. The password must have 3 of the following: 1 upper case letter, 1 lower case letter, 1 special character and 1 number. Don’t lose it, you will need it.

Local Admin.PNG

Grab a cuppa; creating your VM and its resources will take 5 – 10 minutes. (In my testing it reliably took between 7 and 8 minutes.) The screen will look like this

deploying.PNG

When the script has finished running you will have deployed a set of resources like this

portal

Login to the Virtual Machine

Copy and paste the correct code from the output at the end of the script to launch the RDP session and save the RDP file to your documents folder for later use.

Or you can find the Virtual machine name in the portal or by running

Get-AzureRmVM -ResourceGroupName <ResourceGroup> | Where-Object {$_.Name -like 'DSVM*'}

You can then use the code below to download an RDP file and log into the virtual machine

Get-AzureRmRemoteDesktopFile -ResourceGroupName <ResourceGroup> -Name <VMName>  -LocalPath C:\WIP\DataScienceVM.rdp -Launch

rdp file.PNG

You will need to login with the local admin account you set up previously, which means that you will need to click on more choices and then the machinename\Username. In this case the machine name is DSVMZIAgd

You can copy the correct Virtual Machine name and Username from the output at the end of the script.

login screen.PNG

If you have forgotten your password, you can reset it in the Portal.
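For reference, it can also be reset from PowerShell with the VMAccess extension rather than the portal. A hedged sketch using the AzureRM cmdlets used elsewhere in this post (the placeholder values are yours to fill in):

# Hedged sketch - reset the local admin password with the VMAccess extension
Set-AzureRmVMAccessExtension -ResourceGroupName <ResourceGroup> -VMName <VMName> -Name 'VMAccessAgent' -Location <Location> -UserName <AdminUserName> -Password <NewPassword>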

Enjoy the Data Science Virtual Machine

You are then logged in and can carry on. Once the Azure PowerShell modules and script are installed you would be able to have a machine up and running within 10 minutes.

vm-desktop

Cleaning Up

To remove the resource group and ALL resources in the resource group, including the data science VM, run:

Remove-AzureRmResourceGroup -Name <ResourceGroup>  -Force

This will remove ALL resources in that resource group, so be careful if you have deployed anything else.

Customising the Deployment

If you want to use different settings for the deployment or want to script the creation of a number of machines, you can run

Get-Help New-WindowsDataScienceVM -Full

and see all the options and further examples. Any questions please feel free to comment

Backing up to URL container name – case is important

If you use SQL Server Backup to URL to back up your databases to Azure blob storage, remember that the container name is case sensitive

So

BACKUP LOG [DatabaseName]
TO URL = N'https://storageaccountname.blob.core.windows.net/containername/databasename_log_dmmyyhhss.trn'
WITH CHECKSUM, NO_COMPRESSION, CREDENTIAL = N'credential'

will work but

BACKUP LOG [DatabaseName]
TO URL = N'https://storageaccountname.blob.core.windows.net/CONTAINERNAME/databasename_log_dmmyyhhss.trn'
WITH CHECKSUM, NO_COMPRESSION, CREDENTIAL = N'credential'

will give a (400) Bad Request error, which may not be easy to diagnose

Msg 3271, Level 16, State 1, Line 1
A nonrecoverable I/O error occurred on file "https://storageacccountname.blob.core.windows.net/CONTAINERNAME/databasename_log_dmmyyhhss.trn':" Backup to URL received an exception from the remote endpoint. 
Exception Message: The remote server returned an error: (400) Bad Request..
Msg 3013, Level 16, State 1, Line 1
BACKUP LOG is terminating abnormally.
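One quick way to check the exact container name (blob container names are always lower case) is to list the containers in the storage account. A minimal sketch with the classic Azure PowerShell storage cmdlets; the account name is a placeholder:

# List the containers in the storage account to confirm the exact (lower case) name
$storageAccount = 'storageaccountname'
$storageKey     = (Get-AzureStorageKey -StorageAccountName $storageAccount).Primary
$ctx            = New-AzureStorageContext -StorageAccountName $storageAccount -StorageAccountKey $storageKey
Get-AzureStorageContainer -Context $ctx | Select-Object Name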

If you are using Ola Hallengren's jobs to perform your backup then your job step will look like this

sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d DBA-Admin -Q "EXECUTE [dbo].[DatabaseBackup] @Databases = 'USER_DATABASES', @URL = 'https://storageaccountname.blob.core.windows.net/containername', @Credential = 'credential', @BackupType = 'LOG', @ChangeBackupType = 'Y', @Verify = 'Y', @CheckSum = 'Y', @LogToTable = 'Y'" -b

Note the @ChangeBackupType = 'Y' parameter, which is not included by default but I think is very useful. If you have just created a database and take log backups every 15 minutes but differential (or full) backups every night, the log backup will fail until a full backup has been taken. This parameter will check if a log backup is possible and, if not, take a full backup instead, meaning that you can still keep to your RTO/RPO requirements even for newly created databases

Using Power Bi with my DBA Database

Every good DBA should have a DBA database. A place to store information about all of their instances and databases.

I have an InstanceList table which looks like this

CREATE TABLE [dbo].[InstanceList](
	[InstanceID] [int] IDENTITY(1,1) NOT NULL,
	[ServerName] [nvarchar](50) NOT NULL,
	[InstanceName] [nvarchar](50) NOT NULL,
	[Port] [int] NOT NULL,
	[AG] [bit] NULL,
	[Inactive] [bit] NULL CONSTRAINT [DF_InstanceList_Inactive] DEFAULT ((0)),
	[Environment] [nvarchar](25) NULL,
	[Location] [nvarchar](30) NULL,
	CONSTRAINT [PK_InstanceList_ID] PRIMARY KEY CLUSTERED
	(
		[InstanceID] ASC
	) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

I use this as the basis for all of my information gathering. By adding Server Name, Instance Name, Port, Environment and Location to the table I use overnight Agent jobs to run PowerShell scripts to gather information about all of the instances. This way the information is dynamic and gathered from the server, so when we add RAM and change max memory this is updated the next time the script runs. You can also automate your installation and decommission procedures (using PowerShell scripts) to add the information to the DBA database automatically

I have 4 scripts

  • ServerInfo which gathers Windows OS information such as version and edition of the operating system, number of processors, amount of RAM, IP address, domain name etc
  • SQLInfo which gathers information about the instance such as SQL version, edition, collation, max and min memory, MAXDOP, service accounts and start modes, default file locations etc (a simplified sketch of this is shown after this list)
  • Database information such as size, data usage, index usage, last backup dates, owner and many more
  • Agent Job information which gathers the outcome of the jobs that have run, their names and category, into two tables: one for a server rollup and one for details about each job
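To give an idea of what the SQLInfo script does, here is a simplified, hedged sketch rather than my production code. The table and column names are the ones above; the DBA database server and database names, and the write-back, are assumptions:

# Hedged sketch - loop through the InstanceList table and gather basic instance information with SMO
Import-Module SqlServer    # or SQLPS on older machines; provides Invoke-Sqlcmd and loads the SMO assemblies

$dbaServer   = 'MyDBAServer'      # assumption - the instance holding the DBA database
$dbaDatabase = 'DBADatabase'      # assumption - the DBA database name

$instances = Invoke-Sqlcmd -ServerInstance $dbaServer -Database $dbaDatabase `
    -Query 'SELECT ServerName, InstanceName, Port FROM dbo.InstanceList WHERE Inactive = 0'

foreach ($instance in $instances) {
    # Build a server\instance,port connection name (adjust for default instances)
    $connection = "$($instance.ServerName)\$($instance.InstanceName),$($instance.Port)"
    $smo = New-Object Microsoft.SqlServer.Management.Smo.Server $connection

    # In the real job these values are written back to the DBA database; here they are just output
    [PSCustomObject]@{
        ServerName = $instance.ServerName
        Version    = $smo.VersionString
        Edition    = $smo.Edition
        Collation  = $smo.Collation
        MaxMemory  = $smo.Configuration.MaxServerMemory.ConfigValue
    }
}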

Recently I have received a lot of requests for information from various sources: auditors asking about encryption and backup policies, project managers asking about database and SQL versions, compliance asking about the numbers of Windows 2003 servers or SQL 2005 servers, and system teams asking which servers in a particular location can be turned off at which time, dependent on which system they are supporting, for a power down

Before we had the DBA database holding all of the information about the instances we would have struggled to compile this information, and when I saw Power BI was released to GA I thought that it would be a good place to start to learn about it. By using data that I understood and answering questions where I knew the format of the answer, I could be more confident about experimenting – i.e. if I know I have 100 servers then any result for servers that exceeds that is incorrect

I have never been a BI guy and I claim no expertise in the correct methods of manipulating the data. There may very well be better methods of achieving these results, and if there are, please feel free to comment below so that I can improve my knowledge and keep on learning

All data shown in the examples below has been generated from real-life data but all identifiable data has been altered or removed. I have no servers in Bolton, it is where I am from originally!!

I downloaded Power BI Desktop from powerbi.com and ran the installer and the first screen you see is this one

1

I then clicked on Get Data

2

And then SQL Server and filled in the details for my DBA Database and clicked connect

3

I used my current Windows credentials

4

It then asked me which tables I wanted to load so I said all of them 🙂

5

Once I had loaded the data I looked at the queries and renamed some of the columns to make more sense to me. I also created some calculated columns by clicking New Column

I created a relative date column using this code from Chris Webb http://blog.crossjoin.co.uk/2013/01/24/building-relative-date-reports-in-powerpivot/

Relative Date Offset = INT([Date] - TODAY())
Relative Date = IF([Relative Date Offset] = 0
, "Today"
, "Today " & IF([Relative Date Offset] > 0, "+", "") & [Relative Date Offset])

This will enable me to show data for the last day

I also did the same for days of the week

DayOfWeek = CONCATENATE(WEEKDAY('Info AgentJobDetail'[LastRunTime],2),FORMAT('Info AgentJobDetail'[LastRunTime]," -dddd"))

Because I struggled to show the information about the Operating system I also created two columns for OS name and OS edition by adding columns as shown below

Operating System Version = SWITCH('Info ServerOSInfo'[OperatingSystem], "Microsoft Windows Server 2012 Datacenter", "Server 2012",
"Microsoft Windows Server 2012 Standard","Server 2012",
"Microsoft Windows Server 2012 R2 Datacenter", "Server 2012 R2",
"Microsoft Windows Server 2008 R2 Standard", "Server 2008 R2",
"Microsoft Windows Server 2008 R2", "Server 2008 R2",
"Microsoft Windows Server 2008 R2 Enterprise", "Server 2008 R2",
"Microsoft® Windows Server® 2008 Standard", "Server 2008",
"Microsoft® Windows Server® 2008 Enterprise","Server 2008",
"Microsoft(R) Windows(R) Server 2003, Standard Edition", "Server 2003",
"Microsoft(R) Windows(R) Server 2003, Enterprise Edition", "Server 2003",
"Microsoft Windows 2000 Server", "Server 2000",
"Unknown")

And

Operating System Edition = SWITCH('Info ServerOSInfo'[OperatingSystem], "Microsoft Windows Server 2012 Datacenter", "DataCenter",
"Microsoft Windows Server 2012 Standard","Standard",
"Microsoft Windows Server 2012 R2 Datacenter", "DataCenter",
"Microsoft Windows Server 2008 R2 Standard", "Standard",
"Microsoft Windows Server 2008 R2 Enterprise", "Enterprise",
"Microsoft® Windows Server® 2008 Standard", "Standard",
"Microsoft® Windows Server® 2008 Enterprise","Enterprise",
"Microsoft(R) Windows(R) Server 2003, Standard Edition", "Standard",
"Microsoft(R) Windows(R) Server 2003, Enterprise Edition", "Enterprise",
"Microsoft Windows 2000 Server", "Server 2000",
"Unknown")

Then I started to play with the data.

This is probably not how a professional would phrase it but I would say that if you don’t know how to use a new application be brave and give it a try.

OBVIOUSLY you are a PROFESSIONAL DBA and will not do anything that would endanger production, use a backup of your database and work locally if you need to.

The first thing I wanted to know was how many servers I had by operating system, how many by SQL version and the location of them so that I could answer the questions I had been asked. I had already written a query to get the correct information to give to the requestors so I knew the correct answers which was also an advantage. I did this like this

I expanded the Info ServerOSInfo query and dragged the ServerName field to the report which created a table of names

6

I then changed the ServerName values to Count

7

I then dragged the calculated column Operating System Version to the table

8

If I click on the table and then donut chart in the visualisations it changes to

9

So you can quickly see how you want the data displayed

I then decided to look at the number of SQL 2005 instances that I had and as I had relationships between SQLInfo and Instancelist and Clients I could build a more dynamic report.

I created a donut chart with SQLVersion as the legend and InstanceID as the values and a table of SQLVersion, ServerName and Instance Name. I also created a card that was count of InstanceID

10

Now it starts getting really useful. If I want to know how many SQL 2005 instances I have I simply click on SQL2005 in the donut chart and the rest of the report changes

11

This is very cool and I hope you can see how useful this could be and how brilliant it would be to enable relevant people within the organisation the ability to look at that report and answer their own questions.

Let's take it to the next step. I have a Location column in the InstanceList table which comprises town names. If I choose a map and drag that column to the Location field and set Values and Color Saturation to the Count of InstanceID

12

and create two tables, one of Client with a count of InstanceID and one of Location with a count of InstanceID, I can do this

13

Look at how it dynamically changes as you click on the data labels – This is very cool and makes me smile every time!! I altered the colour saturation colours to make it easier to see. Now if I am asked about SQL 2005 servers I can quickly click on SQL 2005 and

14

I can see that there are 32 instances, most are in Southampton, and which clients they support

If I click a location rather than SQL version the report alters like so

15

So you can simply pass the report file to your colleagues to enable them to use it, or you can publish it to powerbi.com. I am not going to go into any detail about the costs or licensing etc; I will just say it is as easy as clicking publish. If you wish to have the information automatically refreshed there are some more steps that you would need to go through, which are detailed here, to connect your on-premises database to Power BI using the data management gateway. Alternatively you can simply refresh the data in the report and then publish it and replace the existing report.

Once the report is in powerbi.com you can enable Q and A on the data. This is some kind of supernatural mystical magical query language which enables you to query your data with natural language and will alter the results as you type and even cope with (deliberate for screenshot) spelling mistakes 🙂

16

I also created a report for my Agent Jobs to enable me to quickly and easily see which Jobs have failed in the last day

17

I did this by filtering the report by Relative Date Offset greater than -1 (today) and isenabled = True and Outcome = Failed

There are many many more ways I can see this being useful and I hope I have given you some ideas and encouraged you to try for yourself and find out more

I have written further posts about this

Using Power Bi with my DBA Database

Populating My DBA Database for Power Bi with PowerShell – Server Info

Populating My DBA Database for Power Bi with PowerShell – SQL Info

Populating My DBA Database for Power Bi with PowerShell – Databases

Power Bi, PowerShell and SQL Agent Jobs

Setting Up and Using Azure VM SQL Automated Backup (and Restore)

This weekend I was creating some Azure VMs to test and was required to use the GUI for some screenshots. I have always used my PowerShell scripts described here to create my test systems and, with a new job taking up a lot of my time, had missed the announcement about Azure SQL Automated Backup and Azure SQL Automated Patching, so was surprised to see this screen

1

I read the announcement and also the details on MSDN https://msdn.microsoft.com/en-us/library/azure/dn906091.aspx which show that this requires the SQL Server IaaS Agent. This is a default option on new virtual machines.

There are some other considerations too. It is only supported for SQL Server 2014 and Windows Server 2012 and 2012 R2 at present, and you can set a retention period to a maximum of 30 days, but it is automated. You do not have to decide upon the backup strategy; Azure will decide the frequency and type of backups dependent upon the workload of the database and some other factors such as

A full backup is taken:
– When an instance is added to use Managed Backup
– When transaction log growth is 1 GB or more
– At least once a week
– If the log chain is broken
– When a database is created

A transaction log backup is taken:
– If no log backup is found
– When transaction log space used is 5 MB or larger
– At least once every two hours
– Any time the transaction log backup is lagging behind a full database backup. The goal is to keep the log chain ahead of the full backup.

From <https://msdn.microsoft.com/en-gb/library/dn449496(v=sql.120).aspx>

There are some restrictions:
– Only database backups are supported
– System databases are not supported, so you need to back those up yourself
– You can only back up to Azure storage
– Maximum backup size is 1 TB, as this is the maximum size for a blob in Azure storage
– The simple recovery model is not supported
– Maximum retention is 30 days; if you are required to keep your backups for longer than 30 days for regulatory or other reasons, you could use Azure Automation to copy the files to another storage account in Azure (a rough sketch of that follows below)
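As a rough idea of how that copy might look (a hedged sketch only; the account names, keys and container names are placeholders), Start-AzureStorageBlobCopy can copy the backup blobs to a second storage account:

# Hedged sketch - copy the backup blobs to a second storage account for longer retention
$sourceCtx = New-AzureStorageContext -StorageAccountName '<backupaccount>' -StorageAccountKey '<sourcekey>'
$destCtx   = New-AzureStorageContext -StorageAccountName '<archiveaccount>' -StorageAccountKey '<destkey>'

Get-AzureStorageBlob -Container '<backupcontainer>' -Context $sourceCtx |
    Start-AzureStorageBlobCopy -DestContainer 'archive' -DestContext $destCtx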

How to set it up.

If you are using the GUI then you will find SQL Automated Backup in the optional config blade of the setup. You can follow the steps here to set it up. If (like me) you want to use PowerShell then use the following code after you have created your Virtual Machine

$storageaccount = "<storageaccountname>"
$storageaccountkey = (Get-AzureStorageKey -StorageAccountName $storageaccount).Primary
$storagecontext = New-AzureStorageContext -StorageAccountName $storageaccount -StorageAccountKey $storageaccountkey

$encryptionpassword = (Get-Credential -message 'Backup Encryption Password' -User 'IGNOREUSER').password
$autobackupconfig = New-AzureVMSqlServerAutoBackupConfig -StorageContext $storagecontext -Enable -RetentionPeriod 10 -EnableEncryption -CertificatePassword $encryptionpassword
Get-AzureVM -ServiceName <vmservicename> -Name <vmname> | Set-AzureVMSqlServerExtension -AutoBackupSettings $autobackupconfig | Update-AzureVM

Once you have run the code, Azure will take care of the rest. Add a couple of databases to your instance and look in the storage account and you will see this

2

3

And in the automaticbackup container you will find the Certificates and master key backups

4

It will also create a credential

5

You can use the same credential to back up your system databases. If like me you use Ola Hallengren's excellent Maintenance Solution then simply change your system databases backup job as follows


USE [msdb]
GO
EXEC msdb.dbo.sp_update_jobstep @job_name = 'DatabaseBackup - SYSTEM_DATABASES - FULL', @step_id = 1,
		@command = N'sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "EXECUTE [dbo].[DatabaseBackup] @Databases = ''SYSTEM_DATABASES'', @URL = ''https://myaccount.blob.core.windows.net/mycontainer''
		,  @Credential = ''AutoBackup_Credential'', @BackupType = ''FULL'', @Verify = ''Y'', @CleanupTime = NULL, @CheckSum = ''Y'', @LogToTable = ''Y''" -b'
GO

If you need to restore your database then you can use the GUI and when you choose restore you will see this screen

6

Enter your storage account and the key, which you can get from the Azure portal. You will notice that the credential has already been selected. Click connect and

7

There are all of your backups, ready to restore to any point in time that you choose. By clicking Script, the T-SQL is generated, which looks like this


USE [master]
BACKUP LOG [Test] TO  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_LogBackup_2015-07-16_06-21-26.bak'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,
NOFORMAT, NOINIT,  NAME = N'Test_LogBackup_2015-07-16_06-21-26',
NOSKIP, NOREWIND, NOUNLOAD,  NORECOVERY ,  STATS = 5
RESTORE DATABASE [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150714201240+00.bak'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150714202740+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150714224241+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150715005741+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150715031242+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150715052742+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150715074243+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150715095743+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150715121243+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150716060004+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NOUNLOAD,  STATS = 5
GO

There is an important note. Remember this when you have just set it up so that you don’t think that you have done it wrong (which is what I did!)

When you enable Automated Patching for the first time, Azure configures the SQL Server IaaS Agent in the background. During this time, the portal will not show that Automated Patching is configured. Wait several minutes for the agent to be installed, configured. After that the portal will reflect the new settings.

From <https://msdn.microsoft.com/en-us/library/azure/dn961166.aspx>

And also look out for this

8

The password I had chosen was not complex enough, but the PowerShell script had succeeded and not given me the warning

To set up SQL Automated Patching you follow similar steps. The setting is again on the OS Config blade; click enable and then you can choose the frequency and duration of the patching.

It is important to remember to choose your maintenance window correctly. If you have set up your SQL VMs correctly you will have them in an availability set and be using either mirroring or Availability Groups, to ensure availability during the underlying host patching. However, I had it confirmed by Principal Software Engineering Manager Sethu Srinivasan via Microsoft PFE Arvind Shyamsundar that SQL Automated Patching is not HA aware, so you will need to ensure that you set the maintenance windows on each VM so that they do not overlap
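For reference, the patching schedule can also be set from PowerShell in the same way as the backup settings, which makes it easy to give each VM a different window. A hedged sketch with the classic cmdlets (the schedule values are only examples):

# Hedged sketch - configure Automated Patching with a Sunday 02:00, 60 minute maintenance window
$autopatchingconfig = New-AzureVMSqlServerAutoPatchingConfig -Enable -DayOfWeek 'Sunday' -MaintenanceWindowStartingHour 2 -MaintenanceWindowDuration 60 -PatchCategory 'Important'

Get-AzureVM -ServiceName <vmservicename> -Name <vmname> | Set-AzureVMSqlServerExtension -AutoPatchingSettings $autopatchingconfig | Update-AzureVM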

Uploading a Source Folder to Azure File Storage


Azure File Storage enables you to present an Azure Storage Account to your IaaS VMs as a share using SMB. You can find out further details here

http://azure.microsoft.com/en-gb/documentation/articles/storage-dotnet-how-to-use-files/ 

Once you have created your Azure File Storage Account and connected your Azure Virtual Machines to it, you may need to upload data from your premises into the storage to enable it to be accessed by the Virtual Machines

To accomplish this I wrote a function and called it Upload-ToAzureFileStorage

I started by creating a source folder and files to test

New-Item -Path C:\temp\TestUpload\New1 -ItemType Directory
New-Item -Path C:\temp\TestUpload\New2 -ItemType Directory
New-Item -Path C:\temp\TestUpload\New3 -ItemType Directory
New-Item -Path C:\temp\TestUpload\New4 -ItemType Directory
New-Item -Path C:\temp\TestUpload\New5 -ItemType Directory
New-Item -Path C:\temp\TestUpload\New1\list -ItemType Directory
New-Item -Path C:\temp\TestUpload\New1\list\a -ItemType Directory
New-Item -Path C:\temp\TestUpload\New1\list\b -ItemType Directory
New-Item -Path C:\temp\TestUpload\New1\list\c -ItemType Directory
New-Item -Path C:\temp\TestUpload\New1\list\d -ItemType Directory
New-Item -Path C:\temp\TestUpload\New1\list\a\1 -ItemType Directory
New-Item -Path C:\temp\TestUpload\New1\list\a\2 -ItemType Directory
New-Item -Path C:\temp\TestUpload\New1\list\a\3 -ItemType Directory
New-Item -Path C:\temp\TestUpload\New1\list\a\4 -ItemType Directory

New-Item -Path C:\temp\TestUpload\New1\file.txt -ItemType File
New-Item -Path C:\temp\TestUpload\New2\file.txt -ItemType File
New-Item -Path C:\temp\TestUpload\New3\file.txt -ItemType File
New-Item -Path C:\temp\TestUpload\New4\file.txt -ItemType File
New-Item -Path C:\temp\TestUpload\New5\file.txt -ItemType File
New-Item -Path C:\temp\TestUpload\New1\list\file.txt -ItemType File
New-Item -Path C:\temp\TestUpload\New1\list\a\file.txt -ItemType File
New-Item -Path C:\temp\TestUpload\New1\list\a\1\file.txt -ItemType File
New-Item -Path C:\temp\TestUpload\New1\list\a\2\file.txt -ItemType File
New-Item -Path C:\temp\TestUpload\New1\list\a\3\file.txt -ItemType File
New-Item -Path C:\temp\TestUpload\New1\list\a\4\file.txt -ItemType File

Then we needed to connect to the subscription, get the storage account access key and create a context to store them

#Select Azure Subscription
Select-AzureSubscription -SubscriptionName $AzureSubscriptionName

# Get the Storage Account Key
$StorageAccountKey = (Get-AzureStorageKey -StorageAccountName $StorageAccountName).Primary

# create a context for account and key
$ctx=New-AzureStorageContext $StorageAccountName $StorageAccountKey

The Get-AzureStorageShare  cmdlet shows the shares available for the context so we can check if the share exists

$S = Get-AzureStorageShare -Context $ctx -ErrorAction SilentlyContinue|Where-Object {$_.Name -eq $AzureShare}

and if it doesn't exist, create it using New-AzureStorageShare

$s = New-AzureStorageShare $AzureShare -Context $ctx

Purely for the sake of doing it a different way, we can check for the existence of the directory in Azure File Storage that we are going to upload the files to like this

$d = Get-AzureStorageFile -Share $s -ErrorAction SilentlyContinue|select Name

if ($d.Name -notcontains $AzureDirectory)

and if it doesn't exist, create it using New-AzureStorageDirectory

$d = New-AzureStorageDirectory -Share $s -Path $AzureDirectory

Now that we have the directory created in the storage account we need to create any subfolders. First get the folders

# get all the folders in the source directory
$Folders = Get-ChildItem -Path $Source -Directory -Recurse

We can then iterate through them using a foreach loop. If we do this and select the FullName property the results will be

C:\temp\TestUpload\New1
C:\temp\TestUpload\New2
C:\temp\TestUpload\New3
C:\temp\TestUpload\New4
C:\temp\TestUpload\New5
C:\temp\TestUpload\New1\list
C:\temp\TestUpload\New1\list\a
C:\temp\TestUpload\New1\list\b
C:\temp\TestUpload\New1\list\c
C:\temp\TestUpload\New1\list\d
C:\temp\TestUpload\New1\list\a\1
C:\temp\TestUpload\New1\list\a\2
C:\temp\TestUpload\New1\list\a\3
C:\temp\TestUpload\New1\list\a\4

but to create new folders we need to remove the “C:\temp\TestUpload” and replace it with the Directory name in Azure. I chose to do this as follows using the substring method and the length of the source folder path.

foreach($Folder in $Folders)
 {
 $f = ($Folder.FullName).Substring(($source.Length))
 $Path = $AzureDirectory + $f

and tested that the results came out as I wanted

AppName\New1
AppName\New2
AppName\New3
AppName\New4
AppName\New5
AppName\New1\list
AppName\New1\list\a
AppName\New1\list\b
AppName\New1\list\c
AppName\New1\list\d
AppName\New1\list\a\1
AppName\New1\list\a\2
AppName\New1\list\a\3
AppName\New1\list\a\4

I could then create the new folders in azure using New-AzureStorageDirectory again

New-AzureStorageDirectory -Share $s -Path $Path -ErrorAction SilentlyContinue

I followed the same process with the files

$files = Get-ChildItem -Path $Source -Recurse -File

foreach($File in $Files)
 {
 $f = ($file.FullName).Substring(($Source.Length))
 $Path = $AzureDirectory + $f

and then created the files using Set-AzureStorageFileContent. This has a -Force and a -Confirm switch, and I added those into my function by using a [switch] parameter

#upload the files to the storage

 if($Confirm)
 {
 Set-AzureStorageFileContent -Share $s -Source $File.FullName -Path $Path -Confirm
 }
 else
 {
 Set-AzureStorageFileContent -Share $s -Source $File.FullName -Path $Path -Force
 }
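Putting it all together, calling the finished function looks something like this; the parameter names reflect the variables used above and are otherwise an assumption:

# Hedged usage example for the function built above
Upload-ToAzureFileStorage -AzureSubscriptionName 'MySubscription' -StorageAccountName 'mystorageaccount' -AzureShare 'myshare' -AzureDirectory 'AppName' -Source 'C:\temp\TestUpload' -Confirm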

You can download the function from the Script Center

https://gallery.technet.microsoft.com/scriptcenter/Recursively-upload-a-bfb615fe

As always, any comments or queries are welcome, and obviously the internet lies, so please understand and test all code you find before using it in production

A look at the SQL Assessment Intelligence Pack in Operational Insights

Operational Insights is a service that has been added in preview to Azure. It enables you to collect, combine, correlate and visualize all your machine data in one place. It can collect data from all of your machines either via SCOM or by using an agent. Once the data is collected Operational Insights has a number of Intelligence Packs which have pre-configured rules and algorithms to provide analysis in various areas including for SQL Server

http://azure.microsoft.com/en-gb/services/operational-insights/

I thought I would take a look. I have an installation of SCOM in my lab on my laptop and I read the instructions to see how to connect it to Operational Insights. (You don't have to have a SCOM installation to use Operational Insights; you can make use of an agent as well, just follow the steps from the page below)

http://azure.microsoft.com/en-us/trial/operational-insights-get-started/

It really is very simple

If you have an Azure subscription already you can sign into the portal and join the preview program by clicking

New –> App Services –> Operational Insights

and create a new Operational Insights Workspace.

Once you have done that, if you have an installation of SCOM 2012 you need to be running Service Pack 1 and download and install the System Center Operational Insights Connector for Operations Manager and import the MPB files into SCOM.

If you have SCOM 2012R2 the connector is already installed and to connect your SCOM to Operational Insights is very very easy as you can see on

http://azure.microsoft.com/en-us/trial/operational-insights-get-started/?step2=withaccount&step3=SCOMcustomer

  1. In the Operations Manager Console, click Administration.
  2. Under Administration, select System Center Advisor, and then click Advisor Connection.
  3. Click Register to Advisor Service.
  4. Sign in with your Microsoft or Organizational account.
  5. Choose an existing Operational Insights workspace from the drop down menu
  6. Confirm your changes.
  7. In the System Center Advisor Overview page, Under Actions, click Add a Computer/Group.
  8. Under Options, select Windows Server or All Instance Groups, and then search for and add the servers that you want to collect data from.

That is it. No really, that is it. I was amazed how quickly I was able to get this done in my lab and it would not take very long in a large implementation of SCOM either as you will have your groups of computers defined which will make it easy to decide which groups to use. You could use a separate workspace for each type of server or split up the information per service. It really is very customisable.

Once you have done that, go and add some of the Intelligence Packs. Each intelligence pack will change the amount and type of data that is collected. As at November 23rd there are

Alert Management – for your SCOM Alerts

Change Tracking – Tracking Configuration Changes

Log Management – for event log collection and interrogation

System Update Assessment – Missing Security Updates

Malware Assessment – Status of Anti-Malware and Anti-Virus scans

Capacity Planning – Identify Capacity and Utilisation bottlenecks

SQL Assessment – The risk and health of SQL Server Environment

There are also two ‘coming soon’ Intelligence packs

AD Assessment – Risk and health of Active Directory

Security – Explore security related data and help identify security breaches

You then (if you are like me) have a period of frustration whilst you wait for all of the data to be uploaded and aggregated but once it is you sign into the Operational Insights Portal

https://preview.opinsights.azure.com and it will look like this

opsman1

There is a lot of information there. As it is on my laptop and the lab is not running all of the time and is not connected to the internet most of the time I am not surprised that there are some red parts to my assessment!!

Obviously I was interested in the SQL Assessment and I explored it a bit further

Clicking on the SQL Assessment tile takes you to a screen which shows the SQL Assessment broken down into 6 Focus areas

Security and Compliance, Availability and Business Continuity, Performance and Scalability, Upgrade, Migration and  Deployment, Operations and Monitoring and Change and Configuration Management. MSDN http://msdn.microsoft.com/en-us/library/azure/dn873967.aspx gives some more information about each one

Security and Compliance – Safeguard the reputation of your organization by defending yourself from security threats and breaches, enforcing corporate policies, and meeting technical, legal and regulatory compliance requirements.

Availability and Business Continuity – Keep your services available and your business profitable by ensuring the resiliency of your infrastructure and by having the right level of business protection in the event of a disaster.

Performance and Scalability – Help your organization to grow and innovate by ensuring that your IT environment can meet current performance requirements and can respond quickly to changing business needs.

Upgrade, Migration and Deployment – Position your IT department to be the key driver of change and innovation, by taking full advantage of new enabling technologies to unlock more business value for organizational units, workforce and customers.

Operations and Monitoring – Lower your IT maintenance budget by streamlining your IT operations and implementing a comprehensive preventative maintenance program to maximize business performance.

Change and Configuration Management – Protect the day-to-day operations of your organization and ensure that changes won’t negatively affect the business by establishing change control procedures and by tracking and auditing system configurations.

You will be able to see some dials showing you how well you are doing in each area for the servers whose data has been collected.

opsman2

Each area will have the High Priority Recommendations shown below the dial and you can click on them to see more information about those recommendations

opsman3

You can also click the dial or the see all link to enter the search area where you can customise how you wish to see the data that has been collected; this looks a bit confusing at first

opsman4

The top bar contains the search, the timescale and some buttons to save the search, view the saved searches and view the search history, all of which will be shown in the right hand column below

The left column contains a bar graph for the search and all of the filters. The middle column contains the results of the search and can be viewed in list or tabular format and exported to CSV using the button below. A little bit of experimentation will give you a better understanding of how the filtering works and how you can make use of that for your environment

By looking at the search for the Operations and Monitoring Focus Area shown above

Type:SQLAssessmentRecommendation IsRollup=true RecommendationPeriod=2014-11 FocusArea="Operations and Monitoring" RecommendationResult=Failed | sort RecommendationWeight desc

I saw that RecommendationResult=Failed and changed it to RecommendationResult=Passed. This enabled me to see all of the Recommendations that had been passed in the Focus Area and clicking the export button downloaded a csv file. I deleted RecommendationResult=Passed from the search and that gave me all of the recommendations that made up that Focus Area

Operations and Monitoring Focus Area

Recommendation
Enable Remote Desktop on servers.
Enable Remote Desktop on virtual machines.
Ensure computers are able to download updates.
Configure event logs to overwrite or archive old events automatically.
Review event log configuration to ensure event data is retained automatically. This relates to System Logs
Review event log configuration to ensure event data is retained automatically. This relates to Application Logs

I decided then to do the same for each of the Focus Areas for the SQL Assessment Intelligence Pack

Security and Compliance Focus Area

Recommendation
Change passwords that are the same as the login name.
Remove logins with blank passwords.
LAN Manager Hash for Passwords Stored
Investigate why unsigned kernel modules were loaded.
Apply security best practices to contained databases.
Enable User Account control on all computers.
Consider disabling the xp_cmdshell extended stored procedure.
Implement Windows authentication on Microsoft Azure-hosted SQL Server deployments.
Avoid using the Local System account to run the SQL Server service.
Avoid adding users to the db_owner database role.
Ensure only essential users are added to the SQL Server sysadmin server role.
Disable SQL Server guest user in all user databases.
Avoid running SQL Server Agent jobs using highly-privileged accounts.
Configure the SQL Server Agent service to use a recommended account.
Apply Windows password policies to SQL Server logins.
Investigate failures to validate the integrity of protected files.
Investigate failures to validate kernel modules.

Availability and Business Continuity Focus Area

Recommendation
Schedule full database backups at least weekly.
Optimize your backup strategy with Microsoft Azure Blob Storage.
Avoid using the Simple database recovery model.
Ensure all installations of Windows are activated.
Investigate logical disk errors.
Reduce the maximum Kerberos access token size.
Investigate connection failures due to SSPI context errors.
Set the PAGE_VERIFY database option to CHECKSUM.
Increase free space on system drives.
Investigate a write error on a disk.
Check the network access to Active Directory domain controllers.
Review DNS configuration on non-DNS servers.
Increase free space on system drives.
Investigate memory dumps.
Increase free space on system drives.
Investigate why the computer shut down unexpectedly.
Enable dynamic DNS registration for domain-joined servers.

Performance and Scalability Focus Area

Recommendation
Increase the number of tempdb database files.
Configure the tempdb database to reduce page allocation contention.
Ensure all tempdb database files have identical initial sizes and growth increments.
Set autogrowth increments for database files and log files to fixed values rather than percentage values.
Set autogrowth increments for transaction log files to less than 1GB.
Modify auto-grow settings to use a fixed size growth increment of less than 1GB and consider enabling Instant File Initialization.
Change your Affinity Mask and Affinity I/O MASK settings to prevent conflicts.
Resolve issues caused by excessive virtual log files.
Modify the database file layout for databases larger than 1TB.
Set the AUTO_CLOSE option to OFF for frequently accessed databases.
Review memory requirements on servers with less than 4GB of physical memory installed.
Configure system SiteName properties to be dynamic.
Align the Max Degree of Parallelism option to the number of logical processors.
Align the Max Degree of Parallelism option to the number of logical processors.
Consider disabling the AUTO_SHRINK database option.
Review memory requirements on computers with high paging file use.
Ensure SQL Server does not consume memory required by other applications and system components.
Consider changing your power saving settings to optimize performance.
Increase the initial size of the tempdb database.
Review the configuration of Maximum Transfer Unit (MTU) size.
Review your paging file settings.
Review and optimize memory cache configuration.
Review the configuration of Maximum Transfer Unit (MTU) size.
Review the system processor scheduling mode.
Review network provider ordering settings.
Remove invalid entries from the PATH environment variable.
Remove network entries from the PATH environment variable.
Investigate processes that use a large number of threads.
Avoid hosting user database files on the same disk volume as tempdb database files.
Review processes with large working set sizes.
Reduce the length of the PATH environment variable.
Reduce the number of entries in the PATH environment variable.
Ensure SQL Server does not consume memory required by other applications and system components.
Enable the backup compression default configuration option.
Ensure the DNS Client service is running and is set to start automatically.
Consider compressing database tables and indexes.

Upgrade, Migration and Deployment Focus Area

Recommendation
Ensure all devices run supported operating system versions.
Ensure that the guest user is enabled in the msdb database.
Avoid using the Affinity64 Mask configuration setting in new development work.
Avoid using the Affinity Mask configuration setting in new development work.
Avoid using the Affinity I/O Mask configuration setting in new development work.
Avoid using the Allow Updates configuration option in SQL Server.
Avoid using the Allow Updates configuration option in SQL Server.
Avoid using the Affinity64 I/O Mask configuration setting in new development work.
Configure SQL Server to accept incoming connections.
Configure SQL Server instances and firewalls to allow communication over TCP/IP.

As I have no data for Change and Configuration Management I was not able to see the recommendations in my Operational Insights workspace.

Edit: Daniele Muscetta has said in the comments that this is a bug which is being tracked

As you can see from the type and description of the recommendations above these are all areas that a DBA will be concerned about and the benefit of having all of this information gathered, pre-sorted, prioritised and presented to you in this manner will enable you to work towards a better SQL environment and track your progress. You can read more about the SQL Assessment Intelligence Pack here

http://msdn.microsoft.com/en-us/library/azure/dn873958.aspx

As well as the pre-determined queries that are built into the Intelligence pack you can search your data in any way that you require enabling you to present information about the health and risk of your SQL Environment to your team or your management with ease. The “with ease” bit is dependent on you understanding the language and structure of the search queries.

You will need to put this page into your bookmarks

http://msdn.microsoft.com/library/azure/dn873997.aspx

As it contains the syntax and definitions to search your data

A very useful page for a starter like me is

http://blogs.msdn.com/b/dmuscett/archive/2014/10/19/advisor-searches-collection.aspx

by Daniele Muscetta which has a list of useful Operational Insights search queries such as

SQL Recommendation by Computer

Type=SQLAssessmentRecommendation IsRollup=false RecommendationResult=Failed | measure count() by Computer

If you click the star to the right of the search box you will find the saved searches. For the SQL Assessment Intelligence Pack there are

Did the agent pass the prerequisite check (if not, SQL Assessment data won’t be complete)
Focus Areas
How many SQL Recommendation are affecting a Computer a SQL Instance or a Database?
How many times did each unique SQL Recommendation trigger?
SQL Assesments passed by Server
SQL Recommendation by Computer
SQL Recommendation by Database
SQL Recommendation by Instance

You can use these and you can save your own searches which show the data in a way that is valuable to you.

Overall I am impressed with this tool and can see how it can be beneficial for a DBA as well as for System Administrators. I was amazed how easy it was to set up and how quickly I was able to start manipulating the data once it had been uploaded.

Using PowerShell to get Azure Endpoint Ports

A quick blog today. I was reading this blog post about How to read the SQL Error Log and I thought I would try some of the examples. I started my Azure VM using the steps in my previous post

I ran

Get-AzureVM -ServiceName TheBestBeard -Name Fade2black

and saw my box had started

image

and then

Get-AzureVM -ServiceName TheBestBeard -Name Fade2black|Get-AzureEndpoint |Format-Table -AutoSize

image

and bingo I had my SQL Port to put in SSMS and can go and play some more with SQL
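If I only want the port number itself, for example to build a connection string, I can filter the endpoints. A small hedged sketch (the endpoint name may differ on your VM):

# Grab just the public port of the SQL endpoint
(Get-AzureVM -ServiceName TheBestBeard -Name Fade2black | Get-AzureEndpoint | Where-Object { $_.Name -like '*SQL*' }).Port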

Starting My Azure SQL Server VMs with PowerShell

 

The last post about Launching Azure VMs with PowerShell made someone ask me to explain how I start my Azure VMs normally so here goes.

When I decide to write a blog post or develop and test a script or run through demos from a presentation or blog post I fire up my Azure Virtual machines with PowerShell. This is how I do it

Open PowerShell and check that I am connected to my default subscription by running Get-AzureSubscription

Note – You must have installed Windows Azure PowerShell and installed the PublishSettingsFile or used Add-AzureAccount for your subscription following the steps here

http://www.windowsazure.com/en-us/manage/install-and-configure-windows-powershell/

Then I run the following three Cmdlets

image
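The cmdlets themselves are in the screenshot; a hedged reconstruction, based on the description that follows, is:

# List the VMs in the subscription and then start them all
Get-AzureVM
Get-AzureVM | Start-AzureVM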

Get-AzureVM shows me the VMs associated with that subscription.

I then pipe to Start-AzureVM as I want to start both machines. If I only wanted one I would check that

Get-AzureVM -name Fade2Black -ServiceName TheBestBeard

 

returned the correct machine and then pipe that to Start-AzureVM

Once the VMs have started I use Get-AzureRemoteDesktopFile giving a local path for the rdp file and specifying -Launch to run the RDP session

image
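A hedged reconstruction of that command (the local path is an assumption):

# Download an RDP file for the VM and launch the session
Get-AzureRemoteDesktopFile -ServiceName TheBestBeard -Name Fade2Black -LocalPath 'C:\temp\Fade2Black.rdp' -Launch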

and away we go 🙂

Once I have finished simply run

image
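The stop command is in the screenshot too; a hedged reconstruction is:

# Stop and deallocate all the VMs; -Force suppresses the warning about stopping the last VM in a cloud service
Get-AzureVM | Stop-AzureVM -Force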

and my machines are stopped and no longer running my credit down.