How I created PowerShell.cool using Flow, Azure SQL DB, Cognitive Services & PowerBi

Last weekend I was thinking about how to save the tweets for PowerShell Conference Europe. This annual event occurs in Hanover and this year it is on April 17-20, 2018. The agenda has just been released and you can find it on the website http://www.psconf.eu/

I ended up creating an interactive PowerBi report to which my good friend and Data Platform MVP Paul Andrew b | t added a bit of magic and I published it. The magnificent Tobias Weltner b | t who organises PSConfEU pointed the domain name http://powershell.cool at the link. It looks like this.

During the monthly #PSTweetChat

I mentioned that I needed to blog about how I created it and Jeff replied

so here it is! I am looking forward to seeing the comparison between the PowerShell and DevOps Summit and the PowerShell Conference Europe 🙂

This is an overview of how it works

 

You will find all of the resources and the scripts to do all of the below in the GitHub repo, so clone it and navigate to the file path where you saved it.

Create Database

First let's create a database. Connect to your Azure subscription
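If you are not already connected, something like this with the AzureRM module will do it (the subscription ID is a placeholder):

Login-AzureRmAccount
# list your subscriptions and pick the one to use
Get-AzureRmSubscription
Set-AzureRmContext -SubscriptionId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'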

01 - subscription.png

Then set some variables
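The repo script defines its own variables; an illustrative sketch of the sort of thing it holds is below ($AzureSQLServer is the name referenced later in this post, everything else is a placeholder):

$ResourceGroupName = 'PSConfEUTweets'
$Location = 'West Europe'
$AzureSQLServer = 'psconfeutweets'       # logical SQL Server name, must be unique in Azure
$DatabaseName = 'tweets'
$MyPublicIP = '203.0.113.10'             # the IP to allow through the firewall
# set and uncomment these before running
# $SQLAdminLogin = 'thebeard'
# $SQLAdminPassword = 'SuperSecretPassword!'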

They should all make sense. Take note that you need to set and uncomment the login and password and choose which IPs to allow through the firewall

Create a Resource Group
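A one-liner with the AzureRM module, using the illustrative variables above:

New-AzureRmResourceGroup -Name $ResourceGroupName -Location $Location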

02 - resource group.png

Create a SQL Server
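A sketch only; the admin login and password become the credential for the new logical server:

$SecurePassword = ConvertTo-SecureString $SQLAdminPassword -AsPlainText -Force
$Credential = New-Object System.Management.Automation.PSCredential ($SQLAdminLogin, $SecurePassword)
New-AzureRmSqlServer -ResourceGroupName $ResourceGroupName -ServerName $AzureSQLServer -Location $Location -SqlAdministratorCredentials $Credential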

03 - create server.png

Create a firewall rule, I just use my own IP and add the allow azure IPs
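Roughly like this: one rule for my own IP plus the -AllowAllAzureIPs switch for the Azure services:

New-AzureRmSqlServerFirewallRule -ResourceGroupName $ResourceGroupName -ServerName $AzureSQLServer -FirewallRuleName 'MyIP' -StartIpAddress $MyPublicIP -EndIpAddress $MyPublicIP
New-AzureRmSqlServerFirewallRule -ResourceGroupName $ResourceGroupName -ServerName $AzureSQLServer -AllowAllAzureIPs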

03a - firewall rule.png

Create a database
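And the database itself, sketched here at the Basic tier:

New-AzureRmSqlDatabase -ResourceGroupName $ResourceGroupName -ServerName $AzureSQLServer -DatabaseName $DatabaseName -Edition Basic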

04 - create database.png

I have used the dbatools module to run the scripts to create the database. You can get it using
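It installs straight from the PowerShell Gallery:

Install-Module dbatools -Scope CurrentUser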

Run the scripts
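The T-SQL for the table and stored procedure is in the repo; running it against the new database with dbatools looks something like this (the script file name is a placeholder and Invoke-DbaQuery is the current name of the dbatools command for running a script file):

$SQLCredential = Get-Credential -Message 'SQL admin login for the Azure SQL Server'
Invoke-DbaQuery -SqlInstance "$AzureSQLServer.database.windows.net" -SqlCredential $SQLCredential -Database $DatabaseName -File '.\CreateTableAndSproc.sql'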

05 - Create Table Sproc.png

This will have created the following in Azure, you can see it in the portal

07 - portal.png

You can connect to the database in SSMS and you will see

06 - show table.png

Create Cognitive Services

Now you can create the Text Analysis Cognitive Services API

First login (if you need to) and set some variables

Then create the API and get the key
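A sketch of both steps with the AzureRM.CognitiveServices cmdlets (the account name, resource group and SKU here are illustrative):

$CSResourceGroup = $ResourceGroupName
$CSName = 'TweetSentiment'
# creating the account is what triggers the prompt mentioned below
$CognitiveServices = New-AzureRmCognitiveServicesAccount -ResourceGroupName $CSResourceGroup -Name $CSName -Type TextAnalytics -SkuName F0 -Location 'westeurope'
# the endpoint URL and one of the keys are needed for the Flow connection
$CognitiveServices.Endpoint
Get-AzureRmCognitiveServicesAccountKey -ResourceGroupName $CSResourceGroup -Name $CSName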

You will need to accept the prompt

08 -cognitive service

Copy the Endpoint URL as you will need it. Then save one of the keys for the next step!

09 cognitiveservice key

 

Create the Flow

I have exported the Flow to a zip file and also the JSON for a PowerApp (no details about that in this post). Both are available in the GitHub repo. I have submitted a template but it is not available yet.

Navigate to https://flow.microsoft.com/ and sign in

Creating Connections

You will need to set up your connections. Click New Connection and search for Text

16 - import step 3.png

Click Add and fill in the Account Key and the Site URL from the steps above

17 import step 5.png

Click New Connection and search for SQL Server

18 - import step 6.png

Enter the SQL Server Name (value of $AzureSQLServer), Database Name, User Name and Password from the steps above

19 - import step 7.png

Click new Connection and search for Twitter and create a connection (the authorisation pop-up may be hidden behind other windows!)

Import the Flow

If you have a premium account you can import the flow. Click Import

11 - import flow.png

12 - choose import.png

and choose the import.zip from the Github Repo

13 import step 1.png

 

Click on Create as new and choose a name

14 - import step 2.png

Click select during import next to Sentiment and choose the Sentiment connection

15 impot step 3.png

Select during import for the SQL Server Connection and choose the SQL Server Connection and do the same for the Twitter Connection

20 - import stpe 8.png

Then click import

21 - imported.png

Create the flow without import

If you do not have a premium account you can still create the flow using these steps. I have created a template but it is not available at the moment. Create the connections as above and then click Create from blank.

22 - importblank.png

 

Choose the trigger When a New Tweet is posted and add a search term. You may need to choose the connection to twitter by clicking the three dots

23 - importblank 1.png

Click Add an action

24 - add action.png

Search for detect and choose the Text Analytics Detect Sentiment action

25 - choose sentuiment.png

Enter the name for the connection, the account key and the URL from the creation of the API above. If you forgot to copy them, you can get them back with PowerShell.
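Something like this, using the illustrative names from the Cognitive Services sketch earlier:

(Get-AzureRmCognitiveServicesAccount -ResourceGroupName $CSResourceGroup -Name $CSName).Endpoint
Get-AzureRmCognitiveServicesAccountKey -ResourceGroupName $CSResourceGroup -Name $CSName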

26 - enter details.png

Click in the text box and choose Tweet Text

27 - choose tweet text.png

Click New Step and add an action. Search for SQL Server and choose SQL Server – Execute Stored Procedure

28 - choose sql server execute stored procedure.png

Choose the stored procedure [dbo].[InsertTweet]

29 - choose stored procedure.png

Fill in as follows

  • __PowerAppsID__: 0
  • Date: Created At
  • Sentiment: Score
  • Tweet: Tweet Text
  • UserLocation: Location
  • UserName: Tweeted By

as shown below

30 stored procedure info.png

Give the flow a name at the top and click save flow

31 flow created.png

Connect PowerBi

Open the PSConfEU Twitter Analysis Direct.pbix from the GitHub repo in PowerBi Desktop. Click the arrow next to Edit Queries and then change data source settings

32 change data source.png

Click Change source and enter the server (value of $AzureSQLServer) and the database name. It will alert you to apply changes

33 apply changes.png

It will then pop-up with a prompt for the credentials. Choose Database and enter your credentials and click connect

34 - creds.png

and your Power BI will be populated from the Azure SQL Database 🙂 This will fail if there are no records in the table because your flow hasn't run yet. If it does, just wait until you see some tweets and then click apply changes again.

You will probably want to alter the pictures and links etc. and then you can publish the report

Happy Twitter Analysis

Don't forget to keep an eye on your flow runs to make sure they have succeeded.

Deploying a Windows Data Science Virtual Machine to Azure with PowerShell easily

This weekend (10 December 2016), I went to Slovenia for a SQL Saturday. As always, it was an amazing event, well organised by Mladen Prajdic, Dejan Sarka, and Matija Lah in a fabulous setting amongst fabulous scenery. I highly recommend it, and Ljubljana is a wonderful place to be in December with all the lights and markets.

WP_20161209_19_21_06_Pro.jpg

Whilst I was there I was asked by someone if you could deploy data science virtual machines in Azure with PowerShell. I said I was sure that it could be done and agreed I would write a blog post, so here it is.

According to the Azure documentation

The Data Science Virtual Machine running on a Windows Server 2012 contains popular tools for data exploration, modeling and development activities. The main tools include Microsoft R Server Developer Edition (An enterprise ready scalable R framework) , Anaconda Python distribution, Jupyter notebooks for Python and R, Visual Studio Community Edition with Python, R and node.js tools, Power BI desktop, SQL Server 2016 Developer edition including support In-Database analytics using Microsoft R Server. It also includes open source deep learning tools like Microsoft Cognitive Toolkit (CNTK 2.0) and mxnet; ML algorithms like xgboost, Vowpal Wabbit. The Azure SDK and libraries on the VM allows you to build your applications using various services in the cloud that are part of the Cortana Analytics Suite which includes Azure Machine Learning, Azure data factory, Stream Analytics and SQL Datawarehouse, Hadoop, Data Lake, Spark and more.

I have created a function to wrap around the process to make it easier for non-PowerShell people to do this. There are a series of steps to follow below and you should be able to create a machine in about 10 minutes once you have completed the pre-requisites.

Enable Programmatically Deployment

First, an annoyance. To be able to deploy Data Science virtual machines in Azure programmatically, you first have to log in to the portal and click some buttons.

In the Portal, click New, then Marketplace, and search for data science. Choose the Windows Data Science Machine and under the blue Create button you will see a link which says “Want to deploy programmatically? Get started”. Clicking this will lead to the following blade.

set-up-programmatically

Click Enable and then save and you then move to PowerShell 🙂

Azure PowerShell Cmdlets

Follow the instructions here to install the Azure PowerShell modules. In the examples you see here I am using Windows 10 and PowerShell version 5.1.14393.479 and I installed the Azure modules using the Install-Module method

Get the script

To install a data science VM, we’ll use the New-WindowsDataScienceVM.ps1 script. In this script, I’m using version 1.2, but any version of this script published in PowerShell Gallery is fine.

To install the New-WindowsDataScienceVM script from the PowerShell gallery, type:

Install-Script New-WindowsDataScienceVM

For more information about using the PowerShellGet cmdlets to install scripts and modules from the PowerShell Gallery, read this page. The PowerShellGet module is included in PowerShell 5.0 and later on Windows 10, but you can install PowerShellGet for PowerShell 3.0 and 4.0. If you cannot connect to the gallery or prefer not to install the module, you can also find the script on GitHub.

Login to Azure

You can login to Azure using the command

Login-AzureRMAccount

which will pop-up a prompt for you to log into Azure

login

Enable Simple Mode

The New-WindowsDataScienceVM function comes with a Simple switch parameter.

If you use -Simple, the function prompts you only for the admin username and password for the virtual machine. It creates a randomly-named, standard_DS1_v2-sized machine in the ukwest data centre with standard, locally redundant storage in a randomly named Resource Group. All of the required objects have random names, too. If that is not what you want, there is more information at the end of this post. I am considering offering a pop-up to choose location in Simple Mode. Let me know here if that would be something you would like

To create a simple data science VM, run:

New-WindowsDataScienceVM -Simple

Enter Local Admin Password

When you run the function, it prompts for a local admin username and password to log into the virtual machine. The password must contain 3 of the following: 1 upper case letter, 1 lower case letter, 1 special character and 1 number. Don't lose it, you will need it.

Local Admin.PNG

Grab a cuppa; creating your VM and its resources will take 5-10 minutes (in my testing it reliably took between 7 and 8 minutes). The screen will look like this

deploying.PNG

When the script has finished running you will have deployed a set of resources like this

portal

Login to the Virtual Machine

Copy and paste the correct code from the output at the end of the script to launch the RDP session and save the RDP file to your documents folder for later use.

Or you can find the Virtual machine name in the portal or by running

Get-AzureRmVM -ResourceGroupName <ResourceGroup> | Where-Object {$_.Name -like 'DSVM*'}

You can then use the code below to download a RDP file and log into the virtual machine using this code

Get-AzureRmRemoteDesktopFile -ResourceGroupName <ResourceGroup> -Name <VMName>  -LocalPath C:\WIP\DataScienceVM.rdp -Launch

rdp file.PNG

You will need to login with the local admin account you set up previously, which means that you will need to click on more choices and then the machinename\Username. In this case the machine name is DSVMZIAgd

You can copy the correct Virtual Machine name and Username from the output at the end of the script.

login screen.PNG

If you have forgotten your password, you can reset it in the Portal.

Enjoy the Data Science Virtual Machine

You are then logged in and can carry on. Once the Azure PowerShell modules and script are installed you would be able to have a machine up and running within 10 minutes.

vm-desktop

Cleaning Up

To remove the resource group and ALL resources in the resource group, including the data science VM, run:

Remove-AzureRmResourceGroup -Name <ResourceGroup>  -Force

This will remove ALL resources in that resource group, so be careful if you have deployed anything else.

Customising the Deployment

If you want to use different settings for the deployment or want to script the creation of a number of machines, you can run

Get-Help New-WindowsDataScienceVM -Full

and see all the options and further examples. Any questions please feel free to comment

Backing up to URL container name – case is important

If you use SQL Backup to URL to back up your databases to Azure blob storage, remember that the container name is case sensitive.

A backup command whose URL matches the container name's case exactly will work, but one where the case differs will give a (400) Bad Request error, which may not be easy to diagnose.
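To illustrate, a sketch using Invoke-Sqlcmd against a container that was created as mycontainer, all lower case (server, storage account and credential names are placeholders):

# matches the container name exactly, so this works
Invoke-Sqlcmd -ServerInstance 'MyServer' -Query "BACKUP DATABASE [MyDatabase] TO URL = 'https://myaccount.blob.core.windows.net/mycontainer/MyDatabase_Full.bak' WITH CREDENTIAL = 'MyAzureCredential', STATS = 10;"
# same container but different case in the URL, so this fails with (400) Bad Request
Invoke-Sqlcmd -ServerInstance 'MyServer' -Query "BACKUP DATABASE [MyDatabase] TO URL = 'https://myaccount.blob.core.windows.net/MyContainer/MyDatabase_Full.bak' WITH CREDENTIAL = 'MyAzureCredential', STATS = 10;"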

If you are using Ola Hallengren's jobs to perform your backups, the job step calls the DatabaseBackup stored procedure with the @URL parameter pointing at your container (again with the exact case) and the backup credential.

Note the @ChangeBackupType = 'Y' parameter, which the default jobs do not include but which I think is very useful. If you have just created a database and take log backups every 15 minutes but differential (or full) backups every night, the log backup will fail until a full backup has been taken. This parameter checks whether a log backup is possible and, if not, takes a full backup instead, meaning that you can still keep to your RTO/RPO requirements even for newly created databases.

Setting Up and Using Azure VM SQL Automated Backup (and Restore)

This weekend I was creating some Azure VMs to test and was required to use the GUI for some screenshots. I have always used my PowerShell scripts described here to create my test systems and, with a new job taking up a lot of my time, I had missed the announcement about Azure SQL Automated Backup and Azure SQL Automated Patching, so I was surprised to see this screen

1

I read the announcement and also the details on MSDN https://msdn.microsoft.com/en-us/library/azure/dn906091.aspx which show that this requires the SQL Server IaaS Agent. This is a default option on new virtual machines.

There are some other considerations too. It is only supported for SQL Server 2014 on Windows Server 2012 and 2012 R2 at present, and you can set a retention period to a maximum of 30 days, but it is automated. You do not have to decide upon the backup strategy; Azure will decide the frequency and type of backups dependent upon the workload of the database and some other factors, as described below.

A full backup is taken:
– when an instance is added to use Managed Backup
– when transaction log growth is 1 GB or more
– at least once a week
– if the log chain is broken
– when a database is created

A transaction log backup is taken:
– if no log backup is found
– when transaction log space used is 5 MB or larger
– at least once every two hours
– any time the transaction log backup is lagging behind a full database backup; the goal is to keep the log chain ahead of the full backup

From <https://msdn.microsoft.com/en-gb/library/dn449496(v=sql.120).aspx>

There are some restrictions:
– Only database backups are supported
– System databases are not supported, so you need to back those up yourself
– You can only back up to Azure storage
– Maximum backup size is 1 TB as this is the maximum size for a blob in Azure storage
– Simple recovery is not supported
– Maximum retention is 30 days; if you are required to keep your backups for longer than 30 days for regulatory or other reasons you could simply use Azure Automation to copy the files to another storage account in Azure

How to set it up.

If you are using the GUI then you will find SQL Automated Backup in the optional config blade of the set up. You can follow the steps here to set it up. If (like me) you want to use PowerShell then use the following code after you have created your Virtual Machine

$storageaccount = "<storageaccountname>"
$storageaccountkey = (Get-AzureStorageKey -StorageAccountName $storageaccount).Primary
$storagecontext = New-AzureStorageContext -StorageAccountName $storageaccount -StorageAccountKey $storageaccountkey

$encryptionpassword = (Get-Credential -message 'Backup Encryption Password' -User 'IGNOREUSER').password
$autobackupconfig = New-AzureVMSqlServerAutoBackupConfig -StorageContext $storagecontext -Enable -RetentionPeriod 10 -EnableEncryption -CertificatePassword $encryptionpassword
Get-AzureVM -ServiceName <vmservicename> -Name <vmname> | Set-AzureVMSqlServerExtension -AutoBackupSettings $autobackupconfig | Update-AzureVM

Once you have run the code, Azure will take care of the rest. Add a couple of databases to your instance and look in the storage account and you will see this

2

3

And in the automaticbackup container you will find the Certificates and master key backups

4

It will also create a credential

5

You can use the same credential to back up your system databases. If, like me, you use Ola Hallengren's excellent Maintenance Solution, then simply change your system databases backup job as follows


USE [msdb]
GO
EXEC msdb.dbo.sp_update_jobstep @job_name = 'DatabaseBackup - SYSTEM_DATABASES - FULL', @step_id = 1,
		@command = N'sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "EXECUTE [dbo].[DatabaseBackup] @Databases = ''SYSTEM_DATABASES'', @URL = ''https://myaccount.blob.core.windows.net/mycontainer''
		,  @Credential = ''AutoBackup_Credential'', @BackupType = ''FULL'', @Verify = ''Y'', @CleanupTime = NULL, @CheckSum = ''Y'', @LogToTable = ''Y''" -b'
GO

If you need to restore your database then you can use the GUI and when you choose restore you will see this screen

6

Enter your storage account and the key which you can get from the Azure portal. You will notice that the credential has already been selected, click connect and

7

There are all of your backups ready to restore to any point in time that you choose. By clicking script the T-SQL is generated which looks like this


USE [master]
BACKUP LOG [Test] TO  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_LogBackup_2015-07-16_06-21-26.bak'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,
NOFORMAT, NOINIT,  NAME = N'Test_LogBackup_2015-07-16_06-21-26',
NOSKIP, NOREWIND, NOUNLOAD,  NORECOVERY ,  STATS = 5
RESTORE DATABASE [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150714201240+00.bak'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150714202740+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150714224241+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150715005741+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150715031242+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150715052742+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150715074243+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150715095743+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150715121243+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 5
RESTORE LOG [Test] FROM  URL = N'https://sqlbackupstoragebeard.blob.core.windows.net/asqlvm9-mssqlserver/Test_b8bb98d7a235487d9789b3ee8759cf3e_20150716060004+00.log'
WITH  CREDENTIAL = N'AutoBackup_Credential' ,  FILE = 1,  NOUNLOAD,  STATS = 5
GO

There is an important note. Remember this when you have just set it up so that you don’t think that you have done it wrong (which is what I did!)

When you enable Automated Patching for the first time, Azure configures the SQL Server IaaS Agent in the background. During this time, the portal will not show that Automated Patching is configured. Wait several minutes for the agent to be installed, configured. After that the portal will reflect the new settings.

From <https://msdn.microsoft.com/en-us/library/azure/dn961166.aspx>

And also look out for this

8

The password I had chosen was not complex enough, but the PowerShell script had succeeded and not given me the warning

To set up SQL Automated Patching you follow similar steps. The setting is again on the OS Config blade; click Enable and then you can choose the frequency and duration of the patching.
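A PowerShell sketch that follows the same pattern as the backup configuration above (the schedule values here are placeholders):

$autopatchingconfig = New-AzureVMSqlServerAutoPatchingConfig -Enable -DayOfWeek 'Sunday' -MaintenanceWindowStartingHour 2 -MaintenanceWindowDuration 120 -PatchCategory 'Important'
Get-AzureVM -ServiceName <vmservicename> -Name <vmname> | Set-AzureVMSqlServerExtension -AutoPatchingSettings $autopatchingconfig | Update-AzureVM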

It is important to remember to choose your maintenance window correctly. If you have set up your SQL VMs correctly you will have them in an availability set and be using either mirroring or Availability Groups, with the VMs in the same availability set to ensure availability during the underlying host patching. However, I had it confirmed by Principal Software Engineering Manager Sethu Srinivasan t via Microsoft PFE Arvind Shyamsundar b | t that SQL Automated Patching is not HA aware, so you will need to ensure that you set the maintenance windows on each VM so that they do not overlap.

Uploading a Source Folder to Azure File Storage

Azure File Storage enables you to present an Azure Storage Account to your IaaS VMs as a share using SMB. You can find out further details here

http://azure.microsoft.com/en-gb/documentation/articles/storage-dotnet-how-to-use-files/ 

Once you have created your Azure File Storage Account and connected your Azure Virtual Machines to it, you may need to upload data from your premises into the storage to enable it to be accessed by the Virtual Machines

To accomplish this I wrote a function and called it Upload-ToAzureFileStorage

I started by creating a source folder and files to test
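Something like this, which roughly recreates the folder structure shown in the listing further down:

1..5 | ForEach-Object { New-Item -Path "C:\temp\TestUpload\New$_" -ItemType Directory -Force }
'a','b','c','d' | ForEach-Object { New-Item -Path "C:\temp\TestUpload\New1\list\$_" -ItemType Directory -Force }
1..4 | ForEach-Object { New-Item -Path "C:\temp\TestUpload\New1\list\a\$_" -ItemType Directory -Force }
New-Item -Path 'C:\temp\TestUpload\New1\test.txt' -ItemType File -Force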

Then we needed to connect to the subscription, get the storage account access key and create a context to store them
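With the classic Azure cmdlets that looks roughly like this (subscription and storage account names are placeholders):

Add-AzureAccount
Select-AzureSubscription -SubscriptionName 'My Azure Subscription'
$StorageAccountName = 'mystorageaccount'
$StorageAccountKey = (Get-AzureStorageKey -StorageAccountName $StorageAccountName).Primary
$StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey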

The Get-AzureStorageShare  cmdlet shows the shares available for the context so we can check if the share exists

and if it doesn't exist, create it using New-AzureStorageShare
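A sketch of the check and create, with a placeholder share name:

$ShareName = 'myfileshare'
$Share = Get-AzureStorageShare -Context $StorageContext | Where-Object { $_.Name -eq $ShareName }
if (-not $Share) { $Share = New-AzureStorageShare -Name $ShareName -Context $StorageContext }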

Purely for the sake of doing it a different way, we can check for the existence of the directory in Azure File Storage that we are going to upload the files to like this

and if it doesn't exist, create it using New-AzureStorageDirectory
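Along these lines, using the AppName directory that appears in the output further down:

$AzureDirectory = 'AppName'
$RootItems = Get-AzureStorageFile -Share $Share
if ($RootItems.Name -notcontains $AzureDirectory) { New-AzureStorageDirectory -Share $Share -Path $AzureDirectory }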

Now that we have the directory created in the storage account we need to create any subfolders. First get the folders

We can then iterate through them using a foreach loop. If we do this and select the FullName property the results will be
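Putting those two steps into code:

$SourceFolder = 'C:\temp\TestUpload'
$Folders = Get-ChildItem -Path $SourceFolder -Recurse -Directory
foreach ($Folder in $Folders) {
    $Folder.FullName
}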

C:\temp\TestUpload\New1
C:\temp\TestUpload\New2
C:\temp\TestUpload\New3
C:\temp\TestUpload\New4
C:\temp\TestUpload\New5
C:\temp\TestUpload\New1\list
C:\temp\TestUpload\New1\list\a
C:\temp\TestUpload\New1\list\b
C:\temp\TestUpload\New1\list\c
C:\temp\TestUpload\New1\list\d
C:\temp\TestUpload\New1\list\a\1
C:\temp\TestUpload\New1\list\a\2
C:\temp\TestUpload\New1\list\a\3
C:\temp\TestUpload\New1\list\a\4

but to create new folders we need to remove the “C:\temp\TestUpload” and replace it with the Directory name in Azure. I chose to do this as follows using the substring method and the length of the source folder path.
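A sketch of that substring approach:

foreach ($Folder in $Folders) {
    # drop 'C:\temp\TestUpload' and put the Azure directory name on the front
    $AzureDirectory + $Folder.FullName.Substring($SourceFolder.Length)
}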

and tested that the results came out as I wanted

AppName\New1
AppName\New2
AppName\New3
AppName\New4
AppName\New5
AppName\New1\list
AppName\New1\list\a
AppName\New1\list\b
AppName\New1\list\c
AppName\New1\list\d
AppName\New1\list\a\1
AppName\New1\list\a\2
AppName\New1\list\a\3
AppName\New1\list\a\4

I could then create the new folders in Azure using New-AzureStorageDirectory again

I followed the same process with the files

and then created the files using Set-AzureStorageFileContent. This has -Force and -Confirm switches, and I added those into my function by using a [switch] parameter
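Pulling those last steps together, a sketch looks like this:

foreach ($Folder in $Folders) {
    $NewPath = $AzureDirectory + $Folder.FullName.Substring($SourceFolder.Length)
    New-AzureStorageDirectory -Share $Share -Path $NewPath
}
$Files = Get-ChildItem -Path $SourceFolder -Recurse -File
foreach ($File in $Files) {
    $NewPath = $AzureDirectory + $File.FullName.Substring($SourceFolder.Length)
    Set-AzureStorageFileContent -Share $Share -Source $File.FullName -Path $NewPath -Force
}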

You can download the function from the Script Center

https://gallery.technet.microsoft.com/scriptcenter/Recursively-upload-a-bfb615fe

As always, any comments or queries are welcome, and obviously the internet lies, so please understand and test all code you find before using it in production

A look at the SQL Assessment Intelligence Pack in Operational Insights

Operational Insights is a service that has been added in preview to Azure. It enables you to collect, combine, correlate and visualize all your machine data in one place. It can collect data from all of your machines either via SCOM or by using an agent. Once the data is collected Operational Insights has a number of Intelligence Packs which have pre-configured rules and algorithms to provide analysis in various areas including for SQL Server

http://azure.microsoft.com/en-gb/services/operational-insights/

I thought I would take a look. I have an installation of SCOM in my lab on my laptop and I read the instructions to see how to connect it to Operational Insights. (You don't have to have a SCOM installation to use Operational Insights; you can make use of an agent as well, just follow the steps from the page below.)

http://azure.microsoft.com/en-us/trial/operational-insights-get-started/

It really is very simple

If you have an Azure subscription already you can sign into the portal and join the preview program by clicking

New –> App Services –> Operational Insights

and create a new Operational Insights Workspace.

Once you have done that, if you have an installation of SCOM 2012 you need to be running Service Pack 1 and download and install the System Center Operational Insights Connector for Operations Manager and import the MPB files into SCOM.

If you have SCOM 2012R2 the connector is already installed and to connect your SCOM to Operational Insights is very very easy as you can see on

http://azure.microsoft.com/en-us/trial/operational-insights-get-started/?step2=withaccount&step3=SCOMcustomer

  1. In the Operations Manager Console, click Administration.
  2. Under Administration, select System Center Advisor, and then click Advisor Connection.
  3. Click Register to Advisor Service.
  4. Sign in with your Microsoft or Organizational account.
  5. Choose an existing Operational Insights workspace from the drop down menu
  6. Confirm your changes.
  7. In the System Center Advisor Overview page, Under Actions, click Add a Computer/Group.
  8. Under Options, select Windows Server or All Instance Groups, and then search and add the servers that you want data collected from.

That is it. No really, that is it. I was amazed how quickly I was able to get this done in my lab and it would not take very long in a large implementation of SCOM either as you will have your groups of computers defined which will make it easy to decide which groups to use. You could use a separate workspace for each type of server or split up the information per service. It really is very customisable.

Once you have done that, go and add some of the Intelligence Packs. Each Intelligence Pack will change the amount and type of data that is collected. As of November 23rd there are

Alert Management – for your SCOM Alerts

Change Tracking – Tracking Configuration Changes

Log Management – for event log collection and interrogation

System Update Assessment – Missing Security Updates

Malware Assessment – Status of Anti-Malware and Anti-Virus scans

Capacity Planning – Identify Capacity and Utilisation bottlenecks

SQL Assessment – The risk and health of SQL Server Environment

There are also two ‘coming soon’ Intelligence packs

AD Assessment – Risk and health of Active Directory

Security – Explore security related data and help identify security breaches

You then (if you are like me) have a period of frustration whilst you wait for all of the data to be uploaded and aggregated but once it is you sign into the Operational Insights Portal

https://preview.opinsights.azure.com and it will look like this

opsman1

There is a lot of information there. As it is on my laptop and the lab is not running all of the time and is not connected to the internet most of the time I am not surprised that there are some red parts to my assessment!!

Obviously I was interested in the SQL Assessment and I explored it a bit further

Clicking on the SQL Assessment tile takes you to a screen which shows the SQL Assessment broken down into 6 Focus areas

Security and Compliance; Availability and Business Continuity; Performance and Scalability; Upgrade, Migration and Deployment; Operations and Monitoring; and Change and Configuration Management. MSDN http://msdn.microsoft.com/en-us/library/azure/dn873967.aspx gives some more information about each one

Security and Compliance – Safeguard the reputation of your organization by defending yourself from security threats and breaches, enforcing corporate policies, and meeting technical, legal and regulatory compliance requirements.

Availability and Business Continuity – Keep your services available and your business profitable by ensuring the resiliency of your infrastructure and by having the right level of business protection in the event of a disaster.

Performance and Scalability – Help your organization to grow and innovate by ensuring that your IT environment can meet current performance requirements and can respond quickly to changing business needs.

Upgrade, Migration and Deployment – Position your IT department to be the key driver of change and innovation, by taking full advantage of new enabling technologies to unlock more business value for organizational units, workforce and customers.

Operations and Monitoring – Lower your IT maintenance budget by streamlining your IT operations and implementing a comprehensive preventative maintenance program to maximize business performance.

Change and Configuration Management – Protect the day-to-day operations of your organization and ensure that changes won’t negatively affect the business by establishing change control procedures and by tracking and auditing system configurations.

You will be able to see some dials showing you how well you are doing in each area for the servers whose data has been collected.

opsman2

Each area will have the High Priority Recommendations shown below the dial and you can click on them to see more information about those recommendations

opsman3

You can also click the dial or the see all link to enter the search area where you can customise how you wish to see the data that has been collected. This looks a bit confusing at first

opsman4

The top bar contains the search, the timescale and some buttons to save the search, view the saved searches and view the search history, all of which will be shown in the right hand column below

The left column contains a bar graph for the search and all of the filters. The middle column contains the results of the search and can be viewed in list or tabular format and exported to CSV using the button below. A little bit of experimentation will give you a better understanding of how the filtering works and how you can make use of that for your environment

By looking at the search for the Operations and Monitoring Focus Area shown above

Type:SQLAssessmentRecommendation IsRollup=true RecommendationPeriod=2014-11 FocusArea="Operations and Monitoring" RecommendationResult=Failed | sort RecommendationWeight desc

I saw that RecommendationResult=Failed and changed it to RecommendationResult=Passed. This enabled me to see all of the Recommendations that had been passed in the Focus Area and clicking the export button downloaded a csv file. I deleted RecommendationResult=Passed from the search and that gave me all of the recommendations that made up that Focus Area

Operations and Monitoring Focus Area

Recommendation
Enable Remote Desktop on servers.
Enable Remote Desktop on virtual machines.
Ensure computers are able to download updates.
Configure event logs to overwrite or archive old events automatically.
Review event log configuration to ensure event data is retained automatically. This relates to System Logs
Review event log configuration to ensure event data is retained automatically. This relates to Application Logs

I decided then to do the same for each of the Focus Areas for the SQL Assessment Intelligence Pack

Security and Compliance Focus Area

Recommendation
Change passwords that are the same as the login name.
Remove logins with blank passwords.
LAN Manager Hash for Passwords Stored
Investigate why unsigned kernel modules were loaded.
Apply security best practices to contained databases.
Enable User Account control on all computers.
Consider disabling the xp_cmdshell extended stored procedure.
Implement Windows authentication on Microsoft Azure-hosted SQL Server deployments.
Avoid using the Local System account to run the SQL Server service.
Avoid adding users to the db_owner database role.
Ensure only essential users are added to the SQL Server sysadmin server role.
Disable SQL Server guest user in all user databases.
Avoid running SQL Server Agent jobs using highly-privileged accounts.
Configure the SQL Server Agent service to use a recommended account.
Apply Windows password policies to SQL Server logins.
Investigate failures to validate the integrity of protected files.
Investigate failures to validate kernel modules.

Availability and Business Continuity Focus Area

Recommendation
Schedule full database backups at least weekly.
Optimize your backup strategy with Microsoft Azure Blob Storage.
Avoid using the Simple database recovery model.
Ensure all installations of Windows are activated.
Investigate logical disk errors.
Reduce the maximum Kerberos access token size.
Investigate connection failures due to SSPI context errors.
Set the PAGE_VERIFY database option to CHECKSUM.
Increase free space on system drives.
Investigate a write error on a disk.
Check the network access to Active Directory domain controllers.
Review DNS configuration on non-DNS servers.
Increase free space on system drives.
Investigate memory dumps.
Increase free space on system drives.
Investigate why the computer shut down unexpectedly.
Enable dynamic DNS registration for domain-joined servers.

Performance and Scalability Focus Area

Recommendation
Increase the number of tempdb database files.
Configure the tempdb database to reduce page allocation contention.
Ensure all tempdb database files have identical initial sizes and growth increments.
Set autogrowth increments for database files and log files to fixed values rather than percentage values.
Set autogrowth increments for transaction log files to less than 1GB.
Modify auto-grow settings to use a fixed size growth increment of less than 1GB and consider enabling Instant File Initialization.
Change your Affinity Mask and Affinity I/O MASK settings to prevent conflicts.
Resolve issues caused by excessive virtual log files.
Modify the database file layout for databases larger than 1TB.
Set the AUTO_CLOSE option to OFF for frequently accessed databases.
Review memory requirements on servers with less than 4GB of physical memory installed.
Configure system SiteName properties to be dynamic.
Align the Max Degree of Parallelism option to the number of logical processors.
Align the Max Degree of Parallelism option to the number of logical processors.
Consider disabling the AUTO_SHRINK database option.
Review memory requirements on computers with high paging file use.
Ensure SQL Server does not consume memory required by other applications and system components.
Consider changing your power saving settings to optimize performance.
Increase the initial size of the tempdb database.
Review the configuration of Maximum Transfer Unit (MTU) size.
Review your paging file settings.
Review and optimize memory cache configuration.
Review the configuration of Maximum Transfer Unit (MTU) size.
Review the system processor scheduling mode.
Review network provider ordering settings.
Remove invalid entries from the PATH environment variable.
Remove network entries from the PATH environment variable.
Investigate processes that use a large number of threads.
Avoid hosting user database files on the same disk volume as tempdb database files.
Review processes with large working set sizes.
Reduce the length of the PATH environment variable.
Reduce the number of entries in the PATH environment variable.
Ensure SQL Server does not consume memory required by other applications and system components.
Enable the backup compression default configuration option.
Ensure the DNS Client service is running and is set to start automatically.
Consider compressing database tables and indexes.

Upgrade, Migration and Deployment Focus Area

Recommendation
Ensure all devices run supported operating system versions.
Ensure that the guest user is enabled in the msdb database.
Avoid using the Affinity64 Mask configuration setting in new development work.
Avoid using the Affinity Mask configuration setting in new development work.
Avoid using the Affinity I/O Mask configuration setting in new development work.
Avoid using the Allow Updates configuration option in SQL Server.
Avoid using the Allow Updates configuration option in SQL Server.
Avoid using the Affinity64 I/O Mask configuration setting in new development work.
Configure SQL Server to accept incoming connections.
Configure SQL Server instances and firewalls to allow communication over TCP/IP.

As I have no data for Change and Configuration Management I was not able to see the recommendations in my Operational Insights workspace.

Edit: Daniele Muscetta has said in the comments that this is a bug which is being tracked

As you can see from the type and description of the recommendations above these are all areas that a DBA will be concerned about and the benefit of having all of this information gathered, pre-sorted, prioritised and presented to you in this manner will enable you to work towards a better SQL environment and track your progress. You can read more about the SQL Assessment Intelligence Pack here

http://msdn.microsoft.com/en-us/library/azure/dn873958.aspx

As well as the pre-determined queries that are built into the Intelligence pack you can search your data in any way that you require enabling you to present information about the health and risk of your SQL Environment to your team or your management with ease. The “with ease” bit is dependent on you understanding the language and structure of the search queries.

You will need to put this page into your bookmarks

http://msdn.microsoft.com/library/azure/dn873997.aspx

As it contains the syntax and definitions to search your data

A very useful page for a starter like me is

http://blogs.msdn.com/b/dmuscett/archive/2014/10/19/advisor-searches-collection.aspx

by Daniele Muscetta which has a list of useful Operational Insights search queries such as

SQL Recommendation by Computer

Type=SQLAssessmentRecommendation IsRollup=false RecommendationResult=Failed | measure count() by Computer

If you click the star to the right of the search box you will find the saved searches. For the SQL Assessment Intelligence Pack there are

Did the agent pass the prerequisite check (if not, SQL Assessment data won’t be complete)
Focus Areas
How many SQL Recommendation are affecting a Computer a SQL Instance or a Database?
How many times did each unique SQL Recommendation trigger?
SQL Assesments passed by Server
SQL Recommendation by Computer
SQL Recommendation by Database
SQL Recommendation by Instance

You can use these and you can save your own searches which show the data in a way that is valuable to you.

Overall I am impressed with this tool and can see how it can be beneficial for a DBA as well as for System Administrators. I was amazed how easy it was to set up and how quickly I was able to start manipulating the data once it had been uploaded.

Using PowerShell to get Azure Endpoint Ports

A quick blog today. I was reading this blog post about How to read the SQL Error Log and I thought I would try some of the examples. I started my Azure VM using the steps in my previous post

I ran

and saw my box had started

image

and then

image

and bingo, I had my SQL port to put into SSMS and could go and play some more with SQL
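The commands themselves are in the screenshots above; a rough equivalent using the classic cmdlets (service and VM names are placeholders) is:

Get-AzureVM -ServiceName 'MyService' -Name 'MySQLVM'                      # confirm the VM has started
Get-AzureVM -ServiceName 'MyService' -Name 'MySQLVM' | Get-AzureEndpoint  # list the endpoints and their public ports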

Starting My Azure SQL Server VMs with PowerShell

 

The last post about Launching Azure VMs with PowerShell made someone ask me to explain how I start my Azure VMs normally so here goes.

When I decide to write a blog post or develop and test a script or run through demos from a presentation or blog post I fire up my Azure Virtual machines with PowerShell. This is how I do it

Open PowerShell and check that I am connected to my default subscription by running Get-AzureSubscription

Note – You must have installed Windows Azure PowerShell and either imported the publish settings file or used Add-AzureAccount for your subscription, following the steps here

http://www.windowsazure.com/en-us/manage/install-and-configure-windows-powershell/

Then I run the following three Cmdlets

image

Get-AzureVM shows me the VMs associated with that subscription.

I then pipe to Start-AzureVM as I want to start both machines. If I only wanted one, I would check that a filtered Get-AzureVM (for example, with Where-Object on the VM name) returned the correct machine and then pipe that to Start-AzureVM

Once the VMs have started I use Get-AzureRemoteDesktopFile, giving a local path for the RDP file and specifying -Launch to run the RDP session

image

and away we go 🙂

Once I have finished simply run

image

and my machines are stopped and no longer running my credit down.
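Pulled together, the whole routine looks roughly like this (service, VM and path names are placeholders):

Get-AzureSubscription                                   # check the default subscription
Get-AzureVM | Start-AzureVM                             # start the VMs
Get-AzureRemoteDesktopFile -ServiceName 'MyService' -Name 'SQLVM1' -LocalPath 'C:\temp\SQLVM1.rdp' -Launch
# ...and when I have finished
Get-AzureVM | Stop-AzureVM -Force                       # -Force allows the last VM in the cloud service to stop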

Launching Azure VM After Starting With PowerShell

So this morning I decided I was going to run through this blog post on understanding query plans http://sqlmag.com/t-sql/understanding-query-plans. I logged into my Azure Portal to check my balance and clicked start on the machine and then immediately clicked connect.

D’oh

image

Of course the RDP session wouldn’t open as the machine was not up so I went and made a coffee. Whilst doing that I thought of a way of doing it with PowerShell

image

A little Do Until loop on the PowerState Property 🙂

image

Of course if I was doing it all through PowerShell I would have done this

image
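A sketch of that Do Until idea (service and VM names are placeholders):

Start-AzureVM -ServiceName 'MyService' -Name 'SQLVM1'
do {
    Start-Sleep -Seconds 10
    $VM = Get-AzureVM -ServiceName 'MyService' -Name 'SQLVM1'
} until ($VM.PowerState -eq 'Started')
Get-AzureRemoteDesktopFile -ServiceName 'MyService' -Name 'SQLVM1' -LocalPath 'C:\temp\SQLVM1.rdp' -Launch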

Add Adventure Works Database to Windows Azure VM

This has been an interesting journey. The Adventure Works database is frequently used in blogs and reference books and I wanted to install it in my Windows Azure Learning Lab and I also wanted to automate the process.

The easiest way is to download the Windows Azure MDF file from http://msftdbprodsamples.codeplex.com/, jump through all the security warnings in Internet Explorer, save the file, and then create the database as follows
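A sketch of that, assuming the MDF has been copied into the default data folder of a SQL Server 2012 instance (instance, path and file names will vary):

Invoke-Sqlcmd -ServerInstance '(local)' -Query "
CREATE DATABASE AdventureWorks2012
ON (FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorks2012_Data.mdf')
FOR ATTACH_REBUILD_LOG;"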

That is the way I will do it from now on! After reading this page I tried to download the file with PowerShell but could not, as I was unable to provide a direct link to the file. Maybe someone can help me with that. So I thought I would use my SkyDrive to hold the MDF file and map a drive on the server.

To do this you need to add the Desktop Experience feature to the server. This can be done as follows
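On Windows Server 2012 this is a one-liner:

Add-WindowsFeature Desktop-Experience -Restart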

This will take a few minutes to install, reboot and then configure the updates before you can log back in. While it is doing this log into your SkyDrive and navigate to a folder and copy the URL to notepad

It will look something like this

https://skydrive.live.com/?lc=2137#cid=XXXXXXXXXXXXXXXX&id=CYYYYYYYYYYYYYYYY

Copy the GUID after the cid=

and write this command
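Something along these lines; the d.docs.live.net WebDAV address format is an assumption about how SkyDrive exposed folders at the time:

$User = Read-Host 'Microsoft account email'
$Password = Read-Host 'Microsoft account password'
net use Z: "https://d.docs.live.net/XXXXXXXXXXXXXXXX" /user:$User $Password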

I keep this in a script and pass the user and password in via Read-Host

However, if you try to copy the item from the folder you will get an error

The file size exceeds the limit allowed and cannot be saved

So you will need to alter a registry key as follows

Set-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Services\WebClient\Parameters -Name FileSizeLimitInBytes -Value 4294967295

and then restart the WebClient service. Then run the net use command to map the drive and copy the file with Copy-Item.

But my script to auto-install the AdventureWorks database via PowerShell, once you have completed all the pre-requisites, pulls those steps together.
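A rough reconstruction of those steps (drive letter, paths and file names are assumptions):

Restart-Service WebClient
net use Z: "https://d.docs.live.net/XXXXXXXXXXXXXXXX" /user:$User $Password
Copy-Item -Path 'Z:\AdventureWorks2012_Data.mdf' -Destination 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\'
# then create the database with the CREATE DATABASE ... FOR ATTACH_REBUILD_LOG shown at the start of this post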

To be honest I don’t think I will use this method as my copy failed twice before it succeeded so I will just download the file and create the database!!

Please don’t ever trust anything you read on the internet and certainly don’t implement it on production servers without first both understanding what it will do and testing it thoroughly. This solution worked for me in my environment I hope it is of use to you in yours but I know nothing about your environment and you know little about mine