Musings of a Data professional

Stuart Moore


Using the SQLAutoRestores PowerShell Module

The SQLAutoRestores module as it currently stands (04/08/2016, v0.9.0.0) is very much based on my personal usage patterns, but the plan is to make it as flexible as possible. So if you see something in this workflow that you'd like to change, or need something to cope with your SQL Server backup procedures, then please leave a comment below, or drop in a feature request at GitHub – https://github.com/Stuart-Moore/SQLAutoRestores/issues

Current practice at work is that all SQL Server databases are backed up, either to a central share, or locally and then transferred to the central share. So we end up with a backup folder structure like:

Root 
 |
 +-Server1
 |    +-DB1
 |    +-DB2
 |
 +-Server2
      +-DB1
      +-DB2

I want to randomly pick a set of backup files to restore. So the first thing I want to do is to get all the ‘Bottom folders’ of this directory tree:

$folders = Get-BottomFolders \\backupserver\share

And if I’ve split the backups across multiple shares I can add more:

$folders += Get-BottomFolders \\server2\backups$\
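As an aside, the 'bottom folder' logic is simple enough to sketch in a few lines. This is a hypothetical stand-in for illustration, not the module's actual implementation:

function Get-LeafFolder {
    param([string]$Root)
    # A 'bottom folder' is one with no subdirectories of its own
    Get-ChildItem -Path $Root -Directory -Recurse |
        Where-Object { -not (Get-ChildItem -Path $_.FullName -Directory) } |
        Select-Object -ExpandProperty FullName
}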

So now I want to pick a folder at random from those. But not everyone cleans up after themselves, so there could be folders that don't contain anything we're interested in. The Test-DBBackupsExist function makes sure a folder actually contains a backup file, so we loop until we get something useful:

$RestoreFolder = Get-RandomElement $folders
while (!(Test-DBBackupsExist $RestoreFolder)){
    $RestoreFolder = Get-RandomElement $folders
}
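The existence test can be as simple as looking for files with common backup extensions. A rough sketch (the .bak/.trn/.dif extensions are my assumption, not necessarily what the module checks for):

function Test-HasBackupFile {
    param([string]$Path)
    # Return $true if at least one file with a common backup extension exists
    [bool](Get-ChildItem -Path $Path -File |
        Where-Object { $_.Extension -in '.bak', '.trn', '.dif' })
}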

Great, we've got a folder with some SQL Server backups in it. Now we need to see what's in the backup files. Scanning the file headers needs a SQL Server instance, so we build a SQL Server connection as well:

$SQLconnection = New-SQLConnection 'server1\instance2'
$BackupObjects = Get-DBBackupObject -InputPath $RestoreFolder -ServerInstance $SQLconnection

This returns a simple PowerShell object containing the header highlights from each file in the folder.
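Under the covers a header scan like this is essentially T-SQL's RESTORE HEADERONLY, which you can run by hand against a single file (illustrative only, not the module's internals):

$file = '\\backupserver\share\Server1\DB1\somebackup.bak'   # placeholder path
Invoke-Sqlcmd -ServerInstance 'server1\instance2' -Query "RESTORE HEADERONLY FROM DISK = N'$file'"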

Note: at this point we've not checked that we've got a complete, restorable set of files. For all we know, we've got 30 transaction log files and no full backup to start from!

I prefer to restore databases to random points in time rather than just the latest available. This gives a wider range of options to compare, and might just mean that you discover your SAN is corrupting the 22:15 t-log backup.

The next function checks that we've got at least one 'anchoring' full backup, picks the earliest point in time that backup covers, gets the latest point in time covered by the backup files, and returns a random point between those two extremes. This will be our recovery point objective:

$TimeToRestore = Get-PointInTime -BackupsObject $BackupObjects
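Conceptually, picking a random point in time is just picking a random tick count between two datetimes. A sketch of the idea, skipping the anchoring-full-backup check, and assuming the objects expose the BackupStartDate and BackupFinishDate fields that RESTORE HEADERONLY returns (the module's property names may differ):

$earliest = ($BackupObjects | Sort-Object BackupStartDate)[0].BackupStartDate
$latest   = ($BackupObjects | Sort-Object BackupFinishDate)[-1].BackupFinishDate

# Get-Random accepts 64-bit bounds, so pick a random tick value and cast it back
$TimeToRestore = [datetime](Get-Random -Minimum $earliest.Ticks -Maximum $latest.Ticks)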

We then filter the backup files down to just those needed to hit this point in time:

$Objective = Get-RestoreSet -BackupsObject $BackupObjects -TargetTime $TimeToRestore

Or, if you just want the latest point in time, you can use:

$Objective = Get-RestoreSet -BackupsObject $BackupObjects -Latest

Now we deal with moving the restored database files to a different location:

$Objective = Get-FileRestoreMove -BackupsObject $Objective -DestinationPath e:\some\path
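In SMO terms this boils down to one RelocateFile mapping per database file. The logical and physical names come from RESTORE FILELISTONLY; a sketch, reusing the $file placeholder from the header-scan example above:

$fileList = Invoke-Sqlcmd -ServerInstance 'server1\instance2' -Query "RESTORE FILELISTONLY FROM DISK = N'$file'"

$relocations = foreach ($f in $fileList) {
    # Keep the original file name, but point it at the new location
    $target = Join-Path 'e:\some\path' (Split-Path $f.PhysicalName -Leaf)
    New-Object Microsoft.SqlServer.Management.Smo.RelocateFile($f.LogicalName, $target)
}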

And now we run some tests before the expensive, time-consuming restore itself. First off, we check we're not about to clobber another database:

Test-DatabaseExists -RestoreSQLServer $SQLconnection -DatabaseName $Objective[0].DatabaseName

Then we check we have enough space to restore. This includes checking for any file growth during the restore (if your transaction log grows to a stupid size during the day, then it'll be grown to that size during the restore and shrunk back down later on, so you need to accommodate the largest amount of space your database occupies, not just its final size):

Test-RestoreSpace -BackupsObject $Objective -RestoreSQLServer $SQLconnection -RestorePath e:\some\Path
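The raw numbers for a check like this can come straight from the backups: the RESTORE FILELISTONLY output above reports each file's Size in bytes, which you can sum and compare to the free space on the target drive. This naive version ignores the file-growth point just made, and assumes a local drive letter:

$needed = ($fileList | Measure-Object -Property Size -Sum).Sum
$free   = (Get-PSDrive -Name 'E').Free
$needed -le $free   # $true if the restore should fit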

And then we test the difference between the SQL Server version of the instance that took the backup and the SQL Server instance we're asking to perform the restore. Microsoft state that restoring across more than 2 major versions isn't allowed, so we fail the check in that case (non-SQL Server backups aren't supported (yet!)):

Test-DBRestoreVersion -BackupsObject $Objective -RestoreSQLServer $SQLconnection
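The version data is also in the headers: RESTORE HEADERONLY exposes SoftwareVersionMajor for each backup, and an SMO Server object exposes VersionMajor for the restoring instance. A sketch of the comparison, assuming $SQLconnection is an SMO Server object:

$backupMajor = ($Objective | Sort-Object SoftwareVersionMajor)[-1].SoftwareVersionMajor
if ($SQLconnection.VersionMajor - $backupMajor -gt 2) {
    throw 'Backup is more than 2 major versions older than the restoring instance'
}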

And finally we restore the database:

Restore-Database -BackupsObject $Objective -RestoreSQLServer $SQLconnection -RestoreTime $TimeToRestore
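For the curious, the core of a point-in-time restore in SMO looks roughly like this. It's illustrative only: the BackupPath property is a guess at the module's object shape, and $SQLconnection is again assumed to be an SMO Server object:

foreach ($backup in $Objective) {
    $restore = New-Object Microsoft.SqlServer.Management.Smo.Restore
    $restore.Database      = $backup.DatabaseName
    $restore.NoRecovery    = $true    # leave recovery until the whole set is applied
    $restore.ToPointInTime = $TimeToRestore.ToString('yyyy-MM-dd HH:mm:ss')
    $restore.Devices.AddDevice($backup.BackupPath, 'File')
    $restore.SqlRestore($SQLconnection)
}
# ...then a final RESTORE DATABASE ... WITH RECOVERY brings the database online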

Now we want to check the restore is fine. It is possible to restore a corrupt database without any errors! (Demo and example here):

Test-Database -DatabaseName $Objective[0].DatabaseName -RestoreSQLServer $SQLconnection
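A check like this typically boils down to DBCC CHECKDB, something along these lines (illustrative, not necessarily the module's exact call):

Invoke-Sqlcmd -ServerInstance 'server1\instance2' -Query "DBCC CHECKDB (N'$($Objective[0].DatabaseName)') WITH NO_INFOMSGS, ALL_ERRORMSGS"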

And then clean up after ourselves:

Remove-Database -DatabaseName $Objective[0].DatabaseName -RestoreSQLServer $SQLconnection

Rinse and repeat ad infinitum. I have this process running 24×7 on a dedicated restore instance. On average I restore 80 databases a day, and cover every production database in a 2-week window (it's random so not guaranteed, but I have a priority list that skews it!).

Currently I collect my statistics with some simple Send-MailMessage usage, but I want something more robust in this module, so that's on the list of things to fix before we go to 1.0.0.0 properly.

Hopefully that's given you some ideas on how to use the module. I'd love to hear any ideas on improvements or how you'd want to use it in your environment. Comment here, drop me an email, or ping me on Twitter (accounts all linked top right).

Day -1 of 31 days of SQL Server backup and Restore using Powershell: SQL Server Powershell Components

Continuing my pre-series introduction to 31 Days of SQL Server Backup and Restore using PowerShell, today we'll look at getting the PowerShell SQL Server components, and a quick mention of GUIs.

All Microsoft operating systems since Windows XP have shipped with PowerShell either installed or as an option. At the time of writing the current version is 3.0 (Download PowerShell v3.0). (PowerShell 4.0 is currently in public beta, but won’t be covered in this series)

Version 3 introduced enough new features to make upgrading worthwhile, especially for automating jobs. These include:

  • Better Scheduled Job Integration
  • Improved Integrated Scripting Environment
  • Workflows

A basic install of PowerShell doesn’t include the SQL PowerShell module which allows you to access the SQL Management Objects. These are installed when you install the client tools from your SQL Server media. They are also now freely available as part of SQL Server 2012 Express Service Pack 1 (download here).

The SMO capabilities are the same across the PowerShell module in every SQL Server edition; the limitations in usage are controlled by the edition of SQL Server you connect to. This means you can install the free SQL Express version on your desktop to control your Enterprise edition servers, and that scripts developed on Express will also work with Standard and Enterprise editions.
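With the 2012-era client tools installed, loading the module and connecting via SMO looks like this (the instance name is just an example):

Import-Module SQLPS -DisableNameChecking

$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost\SQLEXPRESS'
$server.Databases | Select-Object Name, RecoveryModel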

SMO also relies on standard T-SQL calls, so as long as the version of SQL Server supports the functionality, you can use it. For instance, all the scripts in this series will work back to SQL Server 2005 (and the basic backups will also work back to SQL Server 7 and 2000). But just because PowerShell supports AlwaysOn Availability Groups, it doesn't mean you can enable them on SQL Server 2005.

PowerShell 3.0 ships with a fairly usable Integrated Scripting Environment (ISE) GUI. This gives you IntelliSense, some basic debugging functionality and an easier-to-read interface to Get-Help.

If you start developing more complex PowerShell scripts then a more fully featured IDE may be preferable. There are a number available, both free and paid-for.

Which one you prefer will mostly come down to personal preference, so I’d suggest trying them all and then choosing one.

Tomorrow I’ll lay out some of the reasons why I’ve moved away from T-SQL for a lot of my backup and restore automation.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

What should a SQL Server DBA be doing in a disaster?

You've had a disaster at work: the data centre is offline, management are panicking and money is being lost. What should you be doing?

Short answer:
Nothing that isn't in the disaster recovery plan (you do have one of those, don't you?).

Longer answer:
Some time ago I got into work at my usual hour to notice a distinct lack of server admins at their desks, and other early starters unable to log in to Active Directory. Neither of these filled me with confidence for a quiet day writing some training material.

A quick walk down to the data centre found some worried-looking electricians, some annoyed-looking server admins and a distinct lack of server noise. Obviously something major had failed and taken out everything, so it was time to dig out the disaster recovery plans.

In previous roles I've had responsibility for hardware and infrastructure, so the urge to jump in and help out was strong. But I could see that the right guys were on the case, and that with the influx of panicking management, another body was only going to slow things down and get in the way.

So I went back to my desk, got out a copy of the SQL Server disaster recovery plan and started refreshing my memory (you do have an offline copy you can read without power, network or server infrastructure?). This was the best place for me to be, and the best use of my time while things were being fixed. It ensured that:

  • I could be found easily by anyone needing me
  • I had information to hand as soon as it was needed (which order to bring the hardware back online, SAN dependencies, etc.)
  • I could make a condensed, time-based checklist of what to do when things came back up (i.e. kill off an import that would fail due to a missing dependency)
  • I kept out of the way of the people working hard to enact their part of the disaster plan; they really don't need another person who doesn't know what's going on bugging them
  • I could work with management on the order for bringing services back
  • I had all the contact details for the service owners I'd have to liaise with for final release (you just know that a disaster will happen while X is on leave, and you won't have the number for Y who's covering their role)

All of this meant that when the call came that the physical layer was back up and running, I could quickly start bringing up the SQL Server instances in an orderly fashion, and run through my checks before handing them over to the business users for clearance.

Disaster recovery plans take a long time to write and to get right, but if you've got one right, then on the day sticking to it makes life go much more smoothly. When the business is suffering, it's not the right time to be improvising, to be working on something you're not completely comfortable with, or to be trying out some new technique you've read on a random DBA's blog.

Making sure you study the plan is even more important if it's one you didn't write. If you're ever brought in to enact a disaster recovery plan you've never seen before, whose provenance you don't know, make sure you read it thoroughly before you start. You don't want to get halfway through and find out that you're missing a vital piece of infrastructure or software.

(Burnt server image courtesy of John/Roadhunter – http://www.flickr.com/photos/roadhunter/68017745/)
