Giving Back – #TSQL2sday

T-SQL Tuesday, which was started by Adam Machanic (blog|twitter) and is now starting its 6th year, is hosted by a different person each month. The host selects the theme and then the blogging begins. Worldwide, on the second Tuesday of the month (all day, based on GMT), bloggers attend this party by blogging about the theme. This month it is hosted by Wayne Sheffield (blog|twitter) and, in the spirit of the holiday season, it is about giving. This is my post on giving back, in the little ways as well as the bigger ones: how you can give back and why it benefits not only others but also yourself.

What’s the SQL Family?

The SQL Family (or SQL community, call it what you will) is a fabulous place to be, full of wonderful, talented, passionate and generous people. Every method of interaction that you could wish for can be found if you look. Ever wanted to know how others do it? Whether this happens in other shops? Wished you had access to someone who knows about 'insert new shiny thing here'?

I guess that is how I joined. I had no peers in my shop, no one to turn to for questions or advice, no support, so I turned to the internet. Now, we all know that the internet lies. It is full of accidental and deliberate mistruths, of partial information and downright bad advice, and you have to be careful where you go and who you trust. I gradually found myself going back to the same resources, and from those I found out about PASS and user groups.

I am in the UK. I found the list of UK SQL User Groups

You can find more, both local and virtual, here.

User Groups

I found a user group near me and went along to a meeting, not knowing what to expect. I found a group of people like me, willing to give up their time to learn and share knowledge: a wide range of people, from DBAs, developers, BI professionals and sysadmins, from those working for international companies looking after many hundreds of SQL Servers to single sysadmins looking after the whole kit and caboodle, and everything in between. A wealth and breadth of knowledge to tap into. You will learn so much, not only from the sessions but also from the conversation with all these other talented people.

Come along.

Simply coming along will bring benefit. Other people will be interested in what you have to say, even if you are in week 0 of your first ever job. Your view will still be valued. Everyone can learn from everybody, and NO-ONE knows it all.

There will come a point where you will pass on a piece of knowledge, an idea or a way of working, and someone will say, "Thank you, I didn't know that." You just gave back. It may even be someone you look up to, someone whose knowledge and experience far outweigh yours and whose every word you hang on. That feels good.

You may ask the questions that others thought of but didn't ask, and boy, are they glad you asked. You just gave back. It's something I do often. I ask questions, and sometimes I ask questions I already know the answer to, so that the answers will benefit the group.

What will you get? More than you can ever put in. Free training, often free pizza, knowledge, advice, guidance, contacts, support, a network of people in your field, notice of job openings, swag, fun, friends, more social events and more and more.

The user groups are run by volunteers, in their own time, out of the goodness of their hearts, and they will always need your help. Turn up five minutes earlier and help set out the chairs, put out the handouts or assist with the tech. You just gave back. Afterwards, before going to the pub, clear the tables, help carry the boxes, put the pizza evidence in the bin. You just gave back.

SQL Saturdays and other community events

SQL Saturdays are held all over the world, most every Saturday. You can find out more about them at https://www.sqlsaturday.com/. There are also other larger events, such as SQL Bits and SQL Relay here in the UK. Everything I wrote about user groups counts here, just on a slightly larger scale. You will be able to attend several sessions across many different areas, for free, on a Saturday.

These events are also run by volunteers, and they will also need your help. If you can spare some time to help on a registration desk, you just gave back. Be a room monitor, ensuring the speaker and delegates have everything they need, the room is tidy and the session runs to time. You just gave back. Put things out and tidy them away again. You just gave back.

You can become a volunteer by asking the people organising the events if they would like your help. These events all have Twitter feeds, email addresses, Facebook pages and many other methods of getting in touch. Contact them and offer your help if you can. You just gave back.

If you fancy taking the next step then you can get involved in organising the events. This is hard work, great fun, a good thing to add to your CV, and you just gave back. There are so many areas to get involved in when organising an event. Premises and technology, speakers and printers, volunteers and sponsors all need co-ordinating. Websites, Twitter feeds, feedback forms, posters and marketing materials all need designing and producing. There are so many ways in which you can provide value to the event, and you just gave back.

Oh, and whilst I am at it, when you attend an event:

Say Thank You to the volunteers. You just gave back.

Speaking and Blogging

All the events named above need speakers. The bigger events like the SQL Saturdays and SQL Bits will generally have more established speakers, but every user group needs speakers, and they will be more likely to accept new speakers and will be very supportive if you say that you are a new speaker. Every speaker had to make their first presentation at some point; they all know how it feels and can provide guidance and advice. You will feel that you don't have anything to speak about that others will want to hear. You do. Your experience, your knowledge, or how you solved or created something will be of interest to people. Of course, you need to check with the user group leaders and members that your idea for a presentation is suitable. Like anything new you do, researching it and taking advice from people with more experience is always useful. Maybe you can start with a lightning talk. Give it a go. You just gave back.

What do you get back from Speaking and Blogging?

I'll tell you a secret. The vast majority of my posts (this is an exception) are written for the benefit of one person. Me.

I write posts to record my learning for myself and to document my knowledge. I use my posts to help me do my job; I know that I wrote down the method of doing something somewhere, and it is part of my store of knowledge. I write posts to answer people's questions so that I have a place to point them to. Occasionally people will email me asking a question, and if it requires a detailed response I will write a post and email them to say that this is the answer. I often point my work colleagues at my blog when they ask me questions about Azure or PowerShell. You could also see your blog as an extension of your CV, use it when job hunting and develop it in that way.

I also write posts to expand my knowledge, and the same goes for speaking. When I am writing a blog post or a presentation I will read around the subject and watch videos to ensure I know more about it. The process of creating that content will improve my own knowledge and work practices, and you will find that, as you write your blog posts, you develop a deeper knowledge too. When you give your presentations you will learn as you answer questions, or as you find the answer to a question afterwards (it's OK to do that); you are improving yourself and your knowledge. You will also be giving back.

Putting your information online will enable people to find it. Sure, you can worry about SEO and getting to the top of search pages, but you know that sometimes the answer is on the ninth page. What you write will be of benefit to others, and by taking the time to post you will be giving back to the community.

You can do one, many or all of those things and you will be giving back. I hope you do.

I will be giving back. You will find me at SQL Bits where I shall be room monitoring and volunteering.

You will find me at SQL Saturday Exeter. I am again one of the fabulous SQL South West team,

who are again organising a SQL Saturday in Exeter in the UK, on April 24th/25th 2015. You can find out more here: http://sqlsouthwest.co.uk/sql-saturday-372/

You still have time, if you are quick, to submit a session or a pre-con for Exeter. Submissions close on 15th December and we would love to have yours.

I shall carry on blogging and hopefully present at some user groups again this year. If you see me anywhere, come up and say hi. You just gave back.


Making a Change Log Easier With PowerShell

Having a change log is a good thing: a quick and simple place to find out what has changed on a server and when. This can be invaluable when troubleshooting and matching a change to a symptom, especially when assessed alongside your performance counter collection. Here is a simple way to make use of a change log and automate it.

Create a simple table:

USE [MDW]
GO

CREATE TABLE [dbo].[ChangeLog](
 [ChangeID] [int] IDENTITY(1,1) PRIMARY KEY,
 [Date] [datetime] NOT NULL,
 [Server] [varchar](50) NOT NULL,
 [UserName] [nvarchar](50) NOT NULL,
 [Change] [nvarchar](max) NOT NULL
)

GO

You can keep this on a central server or create a database on each server, whichever fits your needs best. You can add other columns if you want your information in a different format.
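If the table grows large, you may also want an index to support retrieval by server name, since that is how the data is queried below. A minimal sketch (the index name and key columns are just a suggestion):

USE [MDW]
GO

-- Optional: support lookups by server name, most recent changes first
CREATE NONCLUSTERED INDEX IX_ChangeLog_Server_Date
ON [dbo].[ChangeLog] ([Server], [Date])
GO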

Once you have your table you can create a couple of PowerShell functions to easily and quickly add data to, and retrieve data from, the table. I make use of Invoke-SQLCMD2 in these functions.

These functions can then be included in any automation tasks that you use to update your environments, whether you are using automated deployment methods for releases or using SCCM to patch your environments, making the change log easy to update and easy to automate as part of your usual deployment process.
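For example, a release step might wrap the deployment and the logging together. A sketch, where Deploy-Release is only a stand-in for whatever your own release process calls:

# Hypothetical deployment step - Deploy-Release is a placeholder for your own process
$Server = 'SERVERNAME'
Deploy-Release -Server $Server -Package 'SuperApp 1.2'
Add-ChangeLog $Server 'Deployed SuperApp release 1.2'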

To add a new change

<#
.Synopsis
 A function to add ChangeLog information
.DESCRIPTION
 Load function for adding a change to the changelog table in the MDW database on MDWSERVER.
 Use Get-ChangeLog $Server to see details
 Inputs the username of the account running powershell into the database as the user
REQUIRES Invoke-SQLCMD2

http://sqldbawithabeard.com

.EXAMPLE
 Add-ChangeLog SERVERNAME "Altered AutoGrowth Settings for TempDB to None"

 Adds ServerName UserName and Altered AutoGrowth Settings for TempDB to None to the change log table
#>
Function Add-ChangeLog
{
[CmdletBinding()]
Param(
 [Parameter(Mandatory=$True)]
 [string]$Server,

 [Parameter(Mandatory=$True)]
 [string]$Change
)

$UserName = $env:USERDOMAIN + '\' + $env:USERNAME

$Query = "INSERT INTO [dbo].[ChangeLog]
 ([Date]
 ,[Server]
 ,[UserName]
 ,[Change])
 VALUES
 (GetDate()
 ,'$Server'
 ,'$UserName'
 ,'$Change')
"
Invoke-Sqlcmd2 -ServerInstance MDWSERVER -Database "MDW" -Query $Query -Verbose
}
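One thing to watch: because the change description is embedded directly in the INSERT statement, a single quote in $Change (or $Server) will break the query. A minimal guard, which you could add inside the function before $Query is built, might look like this sketch (not part of the original function):

# Double up single quotes so the embedded T-SQL string stays valid
$Server = $Server -replace "'","''"
$Change = $Change -replace "'","''"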

You can then run

Add-ChangeLog SERVERNAME "Added New Database SuperAppData"

to add the change to the change log

To retrieve the data you can use

<#
.Synopsis
 A function to get ChangeLog information
.DESCRIPTION
 Load function for finding ChangeLog information. Information is selected from the MDW database on MDWSERVER
REQUIRES Invoke-SQLCMD2

http://sqldbawithabeard.com

.EXAMPLE
 Get-ChangeLog SERVERNAME
#>
Function Get-ChangeLog
{
 [CmdletBinding()]
 Param
 (
 # Server Name Required
 [Parameter(Mandatory=$true)]
 [string]$Server
 )

# Column formatting for the results
$a = @{Expression={$_.Date};Label="Date";width=15},
@{Expression={$_.Server};Label="Server";width=10},
@{Expression={$_.UserName};Label="UserName";width=20},
@{Expression={$_.Change};Label="Change";width=18}

Invoke-Sqlcmd2 -ServerInstance MDWSERVER -Database "MDW" -Query "SELECT * FROM dbo.ChangeLog WHERE Server = '$Server';" -Verbose | Format-Table $a -AutoSize -Wrap

}

and use

Get-ChangeLog SERVERNAME

to find out what changed and when. Happy automating!

A look at the SQL Assessment Intelligence Pack in Operational Insights

Operational Insights is a service that has been added in preview to Azure. It enables you to collect, combine, correlate and visualize all your machine data in one place. It can collect data from all of your machines, either via SCOM or by using an agent. Once the data is collected, Operational Insights has a number of Intelligence Packs with pre-configured rules and algorithms to provide analysis in various areas, including for SQL Server.

http://azure.microsoft.com/en-gb/services/operational-insights/

I thought I would take a look. I have an installation of SCOM in my lab on my laptop, and I read the instructions to see how to connect it to Operational Insights. (You don't have to have a SCOM installation to use Operational Insights; you can make use of an agent instead, just follow the steps on the page below.)

http://azure.microsoft.com/en-us/trial/operational-insights-get-started/

It really is very simple

If you have an Azure subscription already you can sign into the portal and join the preview program by clicking

New –> App Services –> Operational Insights

and create a new Operational Insights Workspace.

Once you have done that, if you have an installation of SCOM 2012 you need to be running Service Pack 1; then download and install the System Center Operational Insights Connector for Operations Manager and import the MPB files into SCOM.

If you have SCOM 2012 R2 the connector is already installed, and connecting your SCOM to Operational Insights is very, very easy, as you can see at

http://azure.microsoft.com/en-us/trial/operational-insights-get-started/?step2=withaccount&step3=SCOMcustomer

  1. In the Operations Manager Console, click Administration.
  2. Under Administration, select System Center Advisor, and then click Advisor Connection.
  3. Click Register to Advisor Service.
  4. Sign in with your Microsoft or Organizational account.
  5. Choose an existing Operational Insights workspace from the drop-down menu.
  6. Confirm your changes.
  7. In the System Center Advisor Overview page, Under Actions, click Add a Computer/Group.
  8. Under Options, select Windows Server or All Instance Groups, and then search for and add the servers that you want data collected from.

That is it. No really, that is it. I was amazed how quickly I was able to get this done in my lab, and it would not take very long in a large implementation of SCOM either, as you will have your groups of computers defined, which will make it easy to decide which groups to use. You could use a separate workspace for each type of server or split up the information per service. It really is very customisable.

Once you have done that, go and add some of the Intelligence Packs. Each Intelligence Pack changes the amount and type of data that is collected. As at November 23rd there are:

Alert Management – for your SCOM Alerts

Change Tracking – Tracking Configuration Changes

Log Management – for event log collection and interrogation

System Update Assessment – Missing Security Updates

Malware Assessment – Status of Anti-Malware and Anti-Virus scans

Capacity Planning – Identify Capacity and Utilisation bottlenecks

SQL Assessment – The risk and health of SQL Server Environment

There are also two ‘coming soon’ Intelligence packs

AD Assessment – Risk and health of Active Directory

Security – Explore security related data and help identify security breaches

You then (if you are like me) have a period of frustration whilst you wait for all of the data to be uploaded and aggregated, but once it is you sign into the Operational Insights portal at https://preview.opinsights.azure.com and it will look like this

opsman1

There is a lot of information there. As it is on my laptop, and the lab is not running all of the time and is not connected to the internet most of the time, I am not surprised that there are some red parts to my assessment!!

Obviously I was interested in the SQL Assessment, so I explored it a bit further.

Clicking on the SQL Assessment tile takes you to a screen which shows the SQL Assessment broken down into six Focus Areas:

Security and Compliance; Availability and Business Continuity; Performance and Scalability; Upgrade, Migration and Deployment; Operations and Monitoring; and Change and Configuration Management. MSDN (http://msdn.microsoft.com/en-us/library/azure/dn873967.aspx) gives some more information about each one:

Security and Compliance – Safeguard the reputation of your organization by defending yourself from security threats and breaches, enforcing corporate policies, and meeting technical, legal and regulatory compliance requirements.

Availability and Business Continuity – Keep your services available and your business profitable by ensuring the resiliency of your infrastructure and by having the right level of business protection in the event of a disaster.

Performance and Scalability – Help your organization to grow and innovate by ensuring that your IT environment can meet current performance requirements and can respond quickly to changing business needs.

Upgrade, Migration and Deployment – Position your IT department to be the key driver of change and innovation, by taking full advantage of new enabling technologies to unlock more business value for organizational units, workforce and customers.

Operations and Monitoring – Lower your IT maintenance budget by streamlining your IT operations and implementing a comprehensive preventative maintenance program to maximize business performance.

Change and Configuration Management – Protect the day-to-day operations of your organization and ensure that changes won’t negatively affect the business by establishing change control procedures and by tracking and auditing system configurations.

You will be able to see some dials showing you how well you are doing in each area for the servers whose data has been collected.

opsman2

Each area will have the High Priority Recommendations shown below the dial and you can click on them to see more information about those recommendations

opsman3

You can also click the dial or the 'see all' link to enter the search area, where you can customise how you wish to see the data that has been collected. This looks a bit confusing at first.

opsman4

The top bar contains the search, the timescale and some buttons to save the search, view the saved searches and view the search history, all of which are shown in the right-hand column below.

The left column contains a bar graph for the search and all of the filters. The middle column contains the results of the search, which can be viewed in list or tabular format and exported to CSV using the button below. A little bit of experimentation will give you a better understanding of how the filtering works and how you can make use of it for your environment.

By looking at the search for the Operations and Monitoring Focus Area shown above

Type=SQLAssessmentRecommendation IsRollup=true RecommendationPeriod=2014-11 FocusArea="Operations and Monitoring" RecommendationResult=Failed | sort RecommendationWeight desc

I saw that RecommendationResult=Failed and changed it to RecommendationResult=Passed. This enabled me to see all of the recommendations that had passed in the Focus Area, and clicking the export button downloaded a CSV file. I then deleted RecommendationResult=Passed from the search, which gave me all of the recommendations that make up that Focus Area.

Operations and Monitoring Focus Area

Recommendation
Enable Remote Desktop on servers.
Enable Remote Desktop on virtual machines.
Ensure computers are able to download updates.
Configure event logs to overwrite or archive old events automatically.
Review event log configuration to ensure event data is retained automatically. This relates to System Logs
Review event log configuration to ensure event data is retained automatically. This relates to Application Logs

I decided then to do the same for each of the Focus Areas for the SQL Assessment Intelligence Pack

Security and Compliance Focus Area

Recommendation
Change passwords that are the same as the login name.
Remove logins with blank passwords.
LAN Manager Hash for Passwords Stored
Investigate why unsigned kernel modules were loaded.
Apply security best practices to contained databases.
Enable User Account control on all computers.
Consider disabling the xp_cmdshell extended stored procedure.
Implement Windows authentication on Microsoft Azure-hosted SQL Server deployments.
Avoid using the Local System account to run the SQL Server service.
Avoid adding users to the db_owner database role.
Ensure only essential users are added to the SQL Server sysadmin server role.
Disable SQL Server guest user in all user databases.
Avoid running SQL Server Agent jobs using highly-privileged accounts.
Configure the SQL Server Agent service to use a recommended account.
Apply Windows password policies to SQL Server logins.
Investigate failures to validate the integrity of protected files.
Investigate failures to validate kernel modules.

Availability and Business Continuity Focus Area

Recommendation
Schedule full database backups at least weekly.
Optimize your backup strategy with Microsoft Azure Blob Storage.
Avoid using the Simple database recovery model.
Ensure all installations of Windows are activated.
Investigate logical disk errors.
Reduce the maximum Kerberos access token size.
Investigate connection failures due to SSPI context errors.
Set the PAGE_VERIFY database option to CHECKSUM.
Increase free space on system drives.
Investigate a write error on a disk.
Check the network access to Active Directory domain controllers.
Review DNS configuration on non-DNS servers.
Increase free space on system drives.
Investigate memory dumps.
Increase free space on system drives.
Investigate why the computer shut down unexpectedly.
Enable dynamic DNS registration for domain-joined servers.

Performance and Scalability Focus Area

Recommendation
Increase the number of tempdb database files.
Configure the tempdb database to reduce page allocation contention.
Ensure all tempdb database files have identical initial sizes and growth increments.
Set autogrowth increments for database files and log files to fixed values rather than percentage values.
Set autogrowth increments for transaction log files to less than 1GB.
Modify auto-grow settings to use a fixed size growth increment of less than 1GB and consider enabling Instant File Initialization.
Change your Affinity Mask and Affinity I/O MASK settings to prevent conflicts.
Resolve issues caused by excessive virtual log files.
Modify the database file layout for databases larger than 1TB.
Set the AUTO_CLOSE option to OFF for frequently accessed databases.
Review memory requirements on servers with less than 4GB of physical memory installed.
Configure system SiteName properties to be dynamic.
Align the Max Degree of Parallelism option to the number of logical processors.
Align the Max Degree of Parallelism option to the number of logical processors.
Consider disabling the AUTO_SHRINK database option.
Review memory requirements on computers with high paging file use.
Ensure SQL Server does not consume memory required by other applications and system components.
Consider changing your power saving settings to optimize performance.
Increase the initial size of the tempdb database.
Review the configuration of Maximum Transfer Unit (MTU) size.
Review your paging file settings.
Review and optimize memory cache configuration.
Review the configuration of Maximum Transfer Unit (MTU) size.
Review the system processor scheduling mode.
Review network provider ordering settings.
Remove invalid entries from the PATH environment variable.
Remove network entries from the PATH environment variable.
Investigate processes that use a large number of threads.
Avoid hosting user database files on the same disk volume as tempdb database files.
Review processes with large working set sizes.
Reduce the length of the PATH environment variable.
Reduce the number of entries in the PATH environment variable.
Ensure SQL Server does not consume memory required by other applications and system components.
Enable the backup compression default configuration option.
Ensure the DNS Client service is running and is set to start automatically.
Consider compressing database tables and indexes.

Upgrade, Migration and Deployment Focus Area

Recommendation
Ensure all devices run supported operating system versions.
Ensure that the guest user is enabled in the msdb database.
Avoid using the Affinity64 Mask configuration setting in new development work.
Avoid using the Affinity Mask configuration setting in new development work.
Avoid using the Affinity I/O Mask configuration setting in new development work.
Avoid using the Allow Updates configuration option in SQL Server.
Avoid using the Allow Updates configuration option in SQL Server.
Avoid using the Affinity64 I/O Mask configuration setting in new development work.
Configure SQL Server to accept incoming connections.
Configure SQL Server instances and firewalls to allow communication over TCP/IP.

As I have no data for Change and Configuration Management I was not able to see the recommendations in my Operation Insights Workspace.

Edit: Daniele Muscetta has said in the comments that this is a bug which is being tracked

As you can see from the type and description of the recommendations above, these are all areas that a DBA will be concerned about, and the benefit of having all of this information gathered, pre-sorted, prioritised and presented to you in this manner will enable you to work towards a better SQL environment and track your progress. You can read more about the SQL Assessment Intelligence Pack here:

http://msdn.microsoft.com/en-us/library/azure/dn873958.aspx

As well as the pre-determined queries that are built into the Intelligence Pack, you can search your data in any way that you require, enabling you to present information about the health and risk of your SQL environment to your team or your management with ease. The 'with ease' bit is dependent on you understanding the language and structure of the search queries.

You will need to put this page into your bookmarks

http://msdn.microsoft.com/library/azure/dn873997.aspx

as it contains the syntax and definitions to search your data.

A very useful page for a starter like me is

http://blogs.msdn.com/b/dmuscett/archive/2014/10/19/advisor-searches-collection.aspx

by Daniele Muscetta, which has a list of useful Operational Insights search queries, such as

SQL Recommendation by Computer

Type=SQLAssessmentRecommendation IsRollup=false RecommendationResult=Failed | measure count() by Computer

If you click the star to the right of the search box you will find the saved searches. For the SQL Assessment Intelligence Pack these are:

Did the agent pass the prerequisite check (if not, SQL Assessment data won’t be complete)
Focus Areas
How many SQL Recommendation are affecting a Computer a SQL Instance or a Database?
How many times did each unique SQL Recommendation trigger?
SQL Assesments passed by Server
SQL Recommendation by Computer
SQL Recommendation by Database
SQL Recommendation by Instance

You can use these and you can save your own searches which show the data in a way that is valuable to you.
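For example, combining the patterns above, a custom search along these lines (an untested sketch using the same syntax) should count the failed recommendations for a single Focus Area by computer:

Type=SQLAssessmentRecommendation IsRollup=false RecommendationResult=Failed FocusArea="Security and Compliance" | measure count() by Computer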

Overall I am impressed with this tool and can see how it can be beneficial for DBAs as well as for system administrators. I was amazed how easy it was to set up and how quickly I was able to start manipulating the data once it had been uploaded.

Changing Delay Between Responses for SQL Alerts with Powershell

So you have read on SQLSkills.com that you should have alerts for severity levels 16 to 24 and for errors 823, 824 and 825, or maybe you have used sp_Blitz and received the result 'No SQL Server Agent Alerts Configured', and like a good and conscientious DBA you have set them up.

Hopefully you also have Jonathan Allen's blog on your feed, and if you have looked at his historical posts you will have seen this one, where the lack of a delay between responses broke the Exchange server!

However, sometimes the oft-used delay between responses of 1 minute is not enough. Alerts should be actionable, after all, and maybe you sync your email every 15 minutes and don't need to see 15 alerts for the same error, or you decide that a certain level of error requires a lesser response and you therefore only need to know about it every hour or three. Or possibly you want to enforce a certain delay for all servers and want to set up a system to check regularly and enforce your rule.

Whatever the reason, changing the delay between responses for every alert on every server with SSMS could be time consuming, so (of course) I will use PowerShell to do the job.

To find the alerts I follow the process I use when finding any new property in PowerShell:


$server = 'SERVERNAME'

$srv = New-Object Microsoft.SqlServer.Management.Smo.Server $Server

I know that the alerts will be found under the JobServer property:


$srv.JobServer.Alerts|Get-Member

Shows me

DelayBetweenResponses   Property   int DelayBetweenResponses {get;set;}

And

 Alter                   Method     void Alter(), void IAlterable.Alter()

So I use both of those as follows


Foreach($Alert in $srv.JobServer.Alerts)

{

$Alert.DelayBetweenResponses = 600 # This is in seconds

$Alert.Alter()

}

and place it in a foreach loop for the servers I want to change (there is a sketch of the full loop after the examples below). If I only want to change certain alerts I can do so by filtering on Name:


Foreach($Alert in $srv.JobServer.Alerts|Where-Object {$_.Name -eq 'NameOfAlert'})

Or by category


Foreach($Alert in $srv.JobServer.Alerts|Where-Object {$_.CategoryName -eq 'Category Name'})
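Putting it all together for multiple servers, the whole thing might look something like this sketch (Servers.txt holding one server name per line):

# A sketch of the full loop - reads server names from a text file
$Servers = Get-Content 'PATH\TO\Servers.txt'
foreach($Server in $Servers)
{
    $srv = New-Object Microsoft.SqlServer.Management.Smo.Server $Server
    foreach($Alert in $srv.JobServer.Alerts)
    {
        $Alert.DelayBetweenResponses = 600 # This is in seconds
        $Alert.Alter()
    }
}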

When you have 5 minutes go and look at the results of


$srv.JobServer|Get-Member

and explore, and let me know what you find.

#tsql2sday #60 – Something New Learned – Problem Step Recorder

What is T-SQL Tuesday?

T-SQL Tuesday is a monthly blog party hosted by a different blogger each month. The blog party was started by Adam Machanic (blog|twitter). You can take part by posting your own participating post that fits the topic of the month and follows the requirements. Additionally, if you are interested in hosting a future T-SQL Tuesday, contact Adam Machanic on his blog.

This month's blog party is hosted by Chris Yates (blog|twitter), who asked people to share something newly learned.

I love being a part of the SQL community. It gives me the opportunity to learn as much as I want to about anything I can think of within the data field. In the last couple of months I have presented at the Newcastle user group and learnt about migrating SQL using PowerShell with Stuart Moore. At our user group in Exeter (http://sqlsouthwest.co.uk/) we had Steph Middleton talking about version control for databases, and lightning talks from Pavol Rovensky on mocking in C#, John Martin on Azure fault domains and availability sets (using a pen and a whiteboard!), Annette Allen on database unit testing and Terry McCann on SQL certifications. We also had Jonathan Allen talking about some free tools and resources to help manage both large and small SQL environments. I went to SQL Relay in Southampton and saw Stuart Moore (again!), Scott Klein, Alex Yates and James Skipworth, and I joined the PASS DBA Fundamentals virtual chapter webinar 'Changing Your Habits to Improve the Performance of Your T-SQL' by Mickey Stuewe, and that's only the 'in-person' learning that I did. I also read a lot of blog posts!

But instead of repeating what I learnt from others within the community, I thought I would write the blog post that I have been meaning to write for a few weeks, about a solution pre-built into Windows that appears not to be well known: Problem Step Recorder.

What is PSR?

I found out about this little-known tool, included in the Windows operating system, a couple of months ago. It enables you to record what you are doing by taking a screenshot of every mouse click. The tool is Step Recorder, also known as PSR. It is included by default in Windows 7, Windows 8 and 8.1, and Windows Server 2008 and above.

What does it do?

Simply put, it records "This is what I did". There are many situations when this can be useful:

  • You can use this during installations to help create documentation. “This is what I did” when I installed X and now you can follow those steps and I know I haven’t missed anything.
  • You can use it when communicating with 3rd parties or other support teams. “This is what I did” when I got this error and here are all of the steps so that you can re-create the issue and I know that I haven’t missed anything
  • You can use this when resolving high priority incidents. “This is what I did” when System X broke, it includes all of the times of my actions.
    I still keep my notepad by my keyboard out of habit but I have a record of the exact steps that I took to try to resolve the issue which will be very useful for reporting on the incident in the near future and also placing into a Knowledge Base for others to use if it happens again and I know I haven’t missed anything
  • For assisting family members. Like many, I am “The IT guy” and PSR enables me to provide clear instructions with pictures showing exactly where I clicked to those family members who are having trouble with “The internet being broken”

It does this by automatically taking a screenshot after every mouse click or program event, with a timestamp and a description of what happened. It does not record keystrokes though, so if you need to record what you have typed there are some manual steps required.

So how do you access PSR?

Simple. Type “psr” into the run box, cmd or PowerShell and it will open

Untitled picture

Once you click on Start Record it will start recording your clicks and taking screenshots. However, I always open the settings first, by clicking on the drop-down to the left of the help icon, and change the number of recent screen captures to store to the maximum value of 100.

1Untitled picture

If you do not, you will get no warning, but PSR will only save the last 25 screenshots it takes, and your results will look like the below. It will still record your actions but will not keep the screenshots.


Step 16: (‎09/‎11/‎2014 13:47:45) User left click on “Chris Yates (@YatesSQL) | Twitter (tab item)”

No screenshots were saved for this step.


Step 17: (‎09/‎11/‎2014 13:47:47) User left click on “The SQL Professor | ‘Leadership Through Service’ (text)”

No screenshots were saved for this step.


Step 18: (‎09/‎11/‎2014 13:47:47) User left click on “T-SQL Tuesday #60 – Something New Learned | The SQL Professor (text)” in “T-SQL Tuesday #60 – Something New Learned | The SQL Professor – Google Chrome”

untitled

You can also set the name and location of the saved file in the settings, but if you leave it blank it will prompt for a location and name once you click Stop Record.
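If you prefer to script it, psr.exe also accepts command-line switches, so you can set the output file and screenshot limit up front. A sketch (run psr.exe /? to confirm the switches available on your version of Windows):

# Start recording with up to 100 screenshots, saving to the given file when stopped
psr.exe /start /output 'C:\Temp\Steps.zip' /maxsc 100
# ... perform the actions you want recorded ...
psr.exe /stop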

How do I add keyboard input?

PSR allows you to add keyboard input manually. You may need this if you want to include the text you have entered into prompts or address bars, or if you wish to add further comment. You can do this by clicking Add Comment, drawing a box around the relevant part of the screen and typing the text into the box.

2Untitled picture

In the results this looks like

Step 1: (‎09/‎11/‎2014 12:56:22) User Comment: “http://www.microsoft.com/en-gb/download/details.aspx?id=42573

untitled1

What do the results look like?

Once you have finished the actions that you want to record (or when you think you are close to 100 screenshots), click Stop Record and the following screen will be displayed

3Untitled picture

This allows you to review what PSR has recorded. You can then save it to a location of your choosing. It is saved as a zip file containing a single .mht file. You can open the file without unzipping the archive and it will open in Internet Explorer. As you can see from the shots below, you can run PSR on your client and it will still record actions in your RDP sessions, although it does not record as much detail. The first two are on my SCOM server in my lab and the second two are on the laptop using the SCOM console.


Step 11: (‎09/‎11/‎2014 13:02:13) User left click on “Input Capture Window (pane)” in “SCOM on ROB-LAPTOP – Virtual Machine Connection”

untitled2


Step 12: (‎09/‎11/‎2014 13:02:16) User left click on “Input Capture Window (pane)” in “SCOM on ROB-LAPTOP – Virtual Machine Connection”

untitled3


Step 13: (‎09/‎11/‎2014 13:06:25) User right click on “Management Packs (tree item)” in “Agent Managed – THEBEARDMANAGEMENTGROUP – Operations Manager”

untitled4


Step 14: (‎09/‎11/‎2014 13:06:27) User left click on “Import Management Packs… (menu item)”

untitled5

You can then use the zip file as you wish. Maybe you email it to your third-party support team (once you have edited any confidential data), or you attach it to your incident in your IT Service Management solution or to a report. If you wish to create documentation, you can open the .mht file in Word, edit it as you see fit and save it appropriately.

So that is one of the many things that I have learnt recently, and I am looking forward to seeing what others have learnt, especially as many will have just been to the SQL PASS Summit. You will be able to find the other posts in this blog party in the comments on Chris's page.

Generating T-SQL Randomly with Powershell

I have a lab on my laptop running various servers so that I can problem solve and learn, and recently I wanted to add several months of data into a database. I had created a stored procedure to take some parameters, perform some logic and insert the data.

To execute the stored procedure in T-SQL I simply run this

EXECUTE [dbo].[usp_Insert_DriveSpace] 'Server1','C','2014-11-05','100','25'

which uses the server name, drive letter, date, capacity and free space to add the data

In my wisdom I decided to create some data that was more 'real life'. I was interested in storing drive space data and will be learning how to write reports on it. To do this I had pre-populated some tables in the database with 10 server names, each with 5 drives, so for around 90 days of data I needed 10*5*90 or 4500 statements.

I wanted to populate this with about three months of data, as if it had been gathered every day. I read this post about using CTEs to create sequences and I am sure it can be done that way, but I don't have the T-SQL skills to do so. If someone can (or has) done that, please let me know, as I am trying to improve my T-SQL skills and would be interested in how to approach and solve this problem with T-SQL.

I solved it with Powershell in this way.

Created an array of Servers and an array of Drives to enable me to iterate through each.

$Servers = 'Server1','Server2','Server3','Server4','Server5','Server6','Server7','Server8','Server9','Server10'
$Drives = 'C','D','E','F','G'

Set the drive capacity for each drive. To make my life slightly easier I standardised my ‘servers’

$CDriveCapacity = 100
$DDriveCapacity = 50
$EDriveCapacity = 200
$FDriveCapacity = 200
$GDriveCapacity = 500

I needed to create a date. You can use Get-Date to get today's date, and to get dates or times in the future or the past you can use the AddDays() method. You can also add ticks, milliseconds, seconds, minutes, hours, months or years.

(Get-Date).AddDays(1)

I then needed to format the date. This is slightly confusing. If you just use Get-Date to get the current date (time) then you can use the -Format or -UFormat switch to format the output:

Get-Date -Format yyyyMMdd
Get-Date -UFormat %Y%m%d

However, this does not work once you have used the AddDays() method. You have to use the ToString() method instead:

 $Date = (get-date).AddDays(-7).ToString('yyyy-MM-dd')

To replicate gathering data each day I decided to use a while loop. I set $x to -98 and pressed CTRL and J to bring up snippets, typed w and picked the while loop. You can find out more about snippets in my previous post. I started in the past so that all the identity keys incremented in a real-life manner, oldest to newest.

$x = -98
while ($x -le 0)
{
    $Date = (get-date).AddDays($x).ToString('yyyy-MM-dd')

    foreach($Server in $Servers)
    {
        foreach ($Drive in $Drives)
        {

I could then use the while loop to generate data for each day and loop through each server and each drive to generate the T-SQL, but I wanted more!

I wanted to generate some random numbers for the free space available for each drive. I used the Get-Random cmdlet. If you are going to use it, make sure you read this post so that you don't get caught by the gotcha (note that -Maximum is exclusive while -Minimum is inclusive, so Get-Random -Maximum 70 -Minimum 3 returns a number from 3 to 69). I decided to set the free space for my OS, Data and Log file drives to somewhere between 70 and 3 GB free, as in this imaginary scenario these drives are carefully monitored and the data and log file sizes are under the control of a careful DBA, but still able to go below thresholds.

if($Drive -eq 'C')
            {
            $Free = Get-Random -Maximum 70 -Minimum 3

I set the TempDB drive to have either 4, 7 or 11 GB free so that I can try to colour-code my reports depending on values; if one field only has three values it makes it simpler to verify.

I set the Backup Drive to somewhere between 50 and 0 so that I will hit 0 sometimes!!

Here is the full script. It generated 4500 T-SQL statements in just under 16 seconds

$Servers = 'Server1','Server2','Server3','Server4','Server5','Server6','Server7','Server8','Server9','Server10'
$Drives = 'C','D','E','F','G'
$CDriveCapacity = 100
$DDriveCapacity = 50
$EDriveCapacity = 200
$FDriveCapacity = 200
$GDriveCapacity = 500

$x = -98
while ($x -le 0)
{
    $Date = (get-date).AddDays($x).ToString('yyyy-MM-dd')

    foreach($Server in $Servers)
    {
        foreach ($Drive in $Drives)
        {
            if($Drive -eq 'C')
            {
            $Free = Get-Random -Maximum 70 -Minimum 3
            Write-Host "EXECUTE [dbo].[usp_Insert_DriveSpace] '$Server','$Drive','$Date','$CDriveCapacity','$Free'"
            }
            elseif($Drive -eq 'D')
            {
            $Free = Get-Random -InputObject 4,7,11
            Write-Host "EXECUTE [dbo].[usp_Insert_DriveSpace] '$Server','$Drive','$Date','$DDriveCapacity','$Free'"
            }
            elseif($Drive -eq 'E')
            {
            $Free = Get-Random -Maximum 70 -Minimum 3
            Write-Host "EXECUTE [dbo].[usp_Insert_DriveSpace] '$Server','$Drive','$Date','$EDriveCapacity','$Free'"
            }
            elseif($Drive -eq 'F')
            {
            $Free = Get-Random -Maximum 70 -Minimum 3
            Write-Host "EXECUTE [dbo].[usp_Insert_DriveSpace] '$Server','$Drive','$Date','$FDriveCapacity','$Free'"
            }
            elseif($Drive -eq 'G')
            {
            $Free = Get-Random -Maximum 50 -Minimum 0
            Write-Host "EXECUTE [dbo].[usp_Insert_DriveSpace] '$Server','$Drive','$Date','$GDriveCapacity','$Free'"
            }
        }
    }
    $X++
}

Once it had run, I simply copied the output into SSMS and was on my way.
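If you would rather skip the copy and paste, each statement could be appended to a .sql file instead of written to the console. A sketch of the change (the file path is just an example):

# Instead of Write-Host, append each statement to a script file
$Query = "EXECUTE [dbo].[usp_Insert_DriveSpace] '$Server','$Drive','$Date','$CDriveCapacity','$Free'"
Add-Content -Path 'C:\Temp\InsertDriveSpace.sql' -Value $Query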

Emailing Disk Space Alerting With Powershell


A DBA doesn't want to run out of space on their servers, even in their labs! To avoid this happening I wrote a PowerShell script to provide some alerts by email.

This is the script and how I worked my way through the solution. I hope it is of benefit to others.

The script works in the following way

  • Iterates through a list of servers
  • Runs a WMI query to gather disk information
  • If the free space has fallen below a threshold, checks to see if it has emailed before and if not emails a warning
  • Resets if free space has risen above the threshold
  • Logs what it does but manages the space the logs use

As you will have seen before, I use a Servers text file in my scripts. This is a text file with a single server name on each line. You could also use a query against a DBA or MDW database using Invoke-SQLCMD2, whichever is the most suitable for you.
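If you prefer the query route, the list could come from an inventory table instead. A sketch (dbo.Servers and the ServerName column are hypothetical and will differ in your environment):

# Hypothetical inventory query in place of the text file
$Servers = (Invoke-Sqlcmd2 -ServerInstance 'MDWSERVER' -Database 'MDW' -Query 'SELECT ServerName FROM dbo.Servers').ServerName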

$Servers = Get-Content 'PATH\TO\Servers.txt'
foreach($Server in $Servers)
{

The WMI query is a very simple one to gather the disk information. I format the results and place them in variables for reuse

$Disks = Get-WmiObject win32_logicaldisk -ComputerName $Server | Where-Object {$_.drivetype -eq 3}
foreach($Disk in $Disks)
{
$TotalSpace = [math]::Round(($Disk.Size/1073741824),2) # change to GB and 2 decimal places
$FreeSpace = [Math]::Round(($Disk.FreeSpace/1073741824),2) # change to GB and 2 decimal places
$UsedSpace = $TotalSpace - $FreeSpace
$PercentFree = [Math]::Round((($FreeSpace/$TotalSpace)*100),2) # percentage free to 2 decimal places

Use a bit of logic to check if the free space is below a threshold, and see if the email has already been sent:

# Check if percent free below warning level
if ($PercentFree -le $SevereLevel)
{
    # if text file has been created (ie email should already have been sent) do nothing
    if(Test-Path $CheckFileSevere) {}
    # if percent free below warning level and text file does not exist create text file and email
    else
    {

If it has not, create a uniquely named text file and create the email body using HTML and the values stored in the variables:

    New-Item $CheckFileSevere -ItemType File
    # Create Email Body
    $EmailBody = ''
    $EmailBody += " "

and then send it

    $Subject = "URGENT Disk Space Alert 1%"
    $Body = $EmailBody
    $msg = New-Object Net.Mail.MailMessage
    $smtp = New-Object Net.Mail.SmtpClient($smtpServer)
    $smtp.port = '25'
    $msg.From = $From
    $msg.Sender = $Sender
    $msg.To.Add($To)
    $msg.Subject = $Subject
    $msg.Body = $Body
    $msg.IsBodyHtml = $True
    $smtp.Send($msg)

If the free space is above all of the warning levels, check for the existence of the text file and delete it if found, so that the next time the free space drops below a threshold the script will send an email:

if(Test-Path $CheckFile)
{
    Remove-Item $CheckFile -Force
}

To enable logging, create a log file each day:

$Logdate = Get-Date -Format yyyyMMdd
$LogFile = $Location + 'logfile' + $LogDate + '.txt'
# if daily log file does not exist create one
if(!(Test-Path $LogFile))
{
    New-Item $Logfile -ItemType File
}

And write the information to it at each action:

$logentrydate = (Get-Date).DateTime
$Log = $logentrydate + ' ' + $ServerName + ' ' + $DriveLetter + ' ' + $VolumeName + ' ' + $PercentFree + ' -- Severe Email Sent'
Add-Content -Value $Log -Path $Logfile

Making sure that you clean up after yourself:

# delete any logfiles older than 7 days
Get-ChildItem -Path $Location *logfile* | Where-Object {$_.LastWriteTime -lt (Get-Date).AddDays(-7)} | Remove-Item -Force

I run the script in a PowerShell step in a SQL Agent job every 5 minutes, and now I know when the servers in my lab are running out of space, with an email like this

image
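If you want to create that job with T-SQL rather than through SSMS, the msdb procedures sketch out something like this (the job name, script path and schedule are examples to adjust):

USE msdb
GO
EXEC dbo.sp_add_job @job_name = N'DiskSpaceAlert'
EXEC dbo.sp_add_jobstep @job_name = N'DiskSpaceAlert', @step_name = N'Run Check',
 @subsystem = N'PowerShell', @command = N'& ''C:\Scripts\DiskSpaceAlert.ps1'''
-- Daily schedule, repeating every 5 minutes
EXEC dbo.sp_add_jobschedule @job_name = N'DiskSpaceAlert', @name = N'Every 5 Minutes',
 @freq_type = 4, @freq_interval = 1, @freq_subday_type = 4, @freq_subday_interval = 5
EXEC dbo.sp_add_jobserver @job_name = N'DiskSpaceAlert'
GO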

You can find the script here