Musings of a Data professional

Stuart Moore

Day 18 of 31 Days of SQL Server Backup and Restore using PowerShell: Relocating files during Restore

So far during the restores portion of this series we’ve been assuming that we can restore our database’s files back to the same locations they were in on the original server. Often this is not the case, even on a dedicated restore server:

  • there’s the possibility that the files already exist
  • a good chance that the restore server’s filesystems are laid out differently
  • you may want to be able to restore the same database to multiple points in time
  • or many other reasons

So today we’re going to look at how you can quickly relocate every file of a restored database without knowing anything about them prior to the restore.

To do this we’re going to use another method of the SMO Restore object, ReadFileList. This works in the same manner as the T-SQL RESTORE FILELISTONLY: when run against a backup file it will list all the database files contained within it. Under PowerShell this will return a DataTable object which we can loop through to identify every file. We can combine this information with the details we want to change to build RelocateFile objects, which we then add to the RelocateFiles collection of the Restore object.

For all the examples in today’s post we’ll assume that the following PowerShell has been run to build the basic Restore environment:

Import-Module sqlps -DisableNameChecking
."c:\PSScripts\SQLBackupFunctions.ps1"

$backupfiles = @()
$backups = @()

$backupfiles += Get-SQLBackupFiles -FilePath c:\DiffRestore -SQLServer "Server1"

foreach($file in $BackupFiles){
    $backups +=Get-BackupContents -FilePath $file -SQLServer "Server1"
}

$RestoreFiles = @()
$RestoreFiles += $Backups | Where-Object {$_.BackupTypeDescription -eq "Database"} | Sort-Object LastLSN | Select-Object -First 1
$RestoreFiles += $Backups | Where-Object {$_.BackupTypeDescription -eq "Database Differential"} | Sort-Object LastLSN | Select-Object -First 1
$tmpLSN = $RestoreFiles | Measure-Object -Property LastLSN -Maximum
$RestoreFiles += $Backups | Where-Object {($_.LastLSN -ge $tmpLSN.Maximum) -and ($_.BackupTypeDescription -eq "Transaction Log")}

$SQLSvr = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server("RestoreServer")
$restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

As a basic example we’ll relocate all the database files in a backup file to c:\restorefiles\:

$Path = "c:\restorefiles\"

$restoredevice = New-Object -TypeName Microsoft.SqlServer.Management.Smo.BackupDeviceItem($RestoreFiles[0].FilePath,$devicetype)
$restore.Devices.Add($restoredevice)
$DBFiles = $restore.ReadFileList($SQLSvr)
if (($DBFiles | Measure-Object -Property IsPresent -Average).Average -eq 1){
    foreach($File in $DBFiles){
        $rf = New-Object -TypeName Microsoft.SqlServer.Management.Smo.RelocateFile
        $rf.LogicalFileName = $File.LogicalName
        $rf.PhysicalFileName = $Path+(Split-Path $File.PhysicalName -Leaf)
        $restore.RelocateFiles.Add($rf)
        Remove-Variable rf
    }
}else{
    Write-Output "Does not contain all db files"
    break;
}

$restore.SqlRestore($SQLSvr)

We use ReadFileList to read the contents of our backup file into $DBFiles. We then use this line to make sure the backup contains all the db files:

if (($DBFiles | Measure-Object -Property IsPresent -Average).Average -eq 1){

This makes use of the fact that PowerShell treats $True as 1 and $False as 0 when you run a mathematical function across them. If all the files are present then the average should be 1, if any files are not present (IsPresent=False) then the average will be less than 1. This will happen if you pick a file that contains a File or FileGroup Backup. We’ll cover performing File and FileGroup restores in a later article, but will pretend they don’t exist for this one!
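
If you want to convince yourself of the trick, here’s a quick sketch that runs without any backups to hand; the two entries are hand-built stand-ins for the rows ReadFileList returns, and the file names are invented:

```powershell
# Hand-built stand-ins for two rows of a ReadFileList result
$DBFiles = @(
    [pscustomobject]@{ LogicalName = 'db_data'; IsPresent = $true },
    [pscustomobject]@{ LogicalName = 'db_log';  IsPresent = $true }
)

# All files present: the booleans average out to exactly 1
$allPresent = ($DBFiles | Measure-Object -Property IsPresent -Average).Average

# Flip one file to missing and the average drops below 1
$DBFiles[1].IsPresent = $false
$someMissing = ($DBFiles | Measure-Object -Property IsPresent -Average).Average
```

After running this, $allPresent is 1 and $someMissing is 0.5, which is exactly the distinction the if statement above relies on.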

Assuming we’ve got a backup with all the files, we then loop through the data table. For each file we create a new RelocateFile object. We populate the LogicalFileName property directly and then build up a new path for the PhysicalFileName. We use the Split-Path cmdlet with its -Leaf switch to split the filename and extension from the rest of the path (ie; Split-Path "c:\test\file.txt" -Leaf would return file.txt). We then add the RelocateFile object to the RelocateFiles collection of our Restore object.

The ReadFileList method can also be used to check the size of the resulting restore. When SQL Server backs up a database it only writes out the data pages, ignoring empty space in the data files. So a database that has 250GB allocated on disk but only contains 120GB of data will produce a smaller backup than another database with 250GB allocated on disk containing 240GB of data. However, when SQL Server restores a database it restores the database files to the size they were when the backup was taken, complete with empty space. So in the first case, despite only having 120GB of data in your database, you’re going to need 250GB of spare disk space to restore the database. And despite all the pleas on dba.stackexchange.com there’s no way around that. The other problem is that as you roll forward transaction log backups the required amount of space on disk can change, so just because you can restore the full backup it doesn’t follow that you can restore the rest of the chain.
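
To put some numbers on that: the Size column ReadFileList returns is the allocated size in bytes, so summing it tells you the disk space a restore will claim however little data the backup actually holds. A quick sketch with invented sizes standing in for ReadFileList rows:

```powershell
# Invented stand-ins for ReadFileList rows; Size is the allocated size in bytes
$DBFiles = @(
    [pscustomobject]@{ LogicalName = 'db_data'; Size = 250GB },
    [pscustomobject]@{ LogicalName = 'db_log';  Size = 50GB }
)

# Total allocated size = the free space the restore will need
$required = ($DBFiles | Measure-Object -Property Size -Sum).Sum
"Restore needs {0}GB of free disk space" -f ($required / 1GB)
```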

By running ReadFileList across all the files to be restored we can calculate the maximum size each file in the database reaches, and either place the files on filesystems with enough space or decide not to proceed with a restore that’s doomed to fail. We can do this as follows:

$DBfiles = @{}
$DBFileSize = @()
foreach ($file in $RestoreFiles){
    $sqlsvr = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server($SQLServer)
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

    $restoredevice = New-Object -TypeName Microsoft.SqlServer.Management.Smo.BackupDeviceItem($file.FilePath,$devicetype)
    $restore.Devices.Add($restoredevice)
    $DBfiles = $restore.ReadFileList($sqlsvr)
    foreach ($tmpfile in $DBfiles){
        if (($DBFileSize.LogicalName -contains $tmpfile.LogicalName) -eq $False){
            $tmpobj = New-Object System.Object
            $tmpobj | Add-Member -Type NoteProperty -Name LogicalName -Value $tmpfile.LogicalName
            $tmpobj | Add-Member -Type NoteProperty -Name PhysicalName -Value $tmpfile.PhysicalName
            $tmpobj | Add-Member -Type NoteProperty -Name Size -Value $tmpfile.Size
            $DBFileSize += $tmpobj
        }elseif ($tmpfile.Size -gt ($DBFileSize | Where-Object {$_.LogicalName -eq $tmpfile.LogicalName}).Size){
            ($DBFileSize | Where-Object {$_.LogicalName -eq $tmpfile.LogicalName}).Size = $tmpfile.Size
        }
    }
    $restore.Devices.Remove($restoredevice)
}

$RestoreLocations = @("c:\restores\","d:\restores\","e:\restores\")
$LocationCount = 0
$MarginForError = 200*1024*1024
$FSO = New-Object -Com Scripting.FileSystemObject
$RelocateFile = @()
$CurrentlyNeeded = @{}

$i = 0
while ($i -lt $RestoreLocations.Count){
    $CurrentlyNeeded[$i]=0
    $i++
}
$i=0

while ($i -lt $DBfileSize.count){
    if ($DBFileSize[$i].size + $MarginForError + $CurrentlyNeeded[$LocationCount] -gt $FSO.getdrive($(Split-Path $RestoreLocations[$LocationCount] -Qualifier)).AvailableSpace){
        $LocationCount++
        if($LocationCount -eq $RestoreLocations.Count){
            write-output "No Space anywhere for this file!"
            break;
        }
    }else{
        $rf = new-object -typename Microsoft.SqlServer.Management.Smo.RelocateFile
        $rf.LogicalFileName = $DBFileSize[$i].LogicalName
        $rf.PhysicalFileName = $RestoreLocations[$LocationCount]+(Split-Path $DBFileSize[$i].PhysicalName -Leaf)
        $RelocateFile += $rf
        Remove-Variable rf
        $CurrentlyNeeded[$LocationCount] += $DBfileSize[$i].size
        $i++
        $LocationCount = 0
    }
}

$RestoreServer = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("RestoreServer")
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
foreach ($backup in $RestoreFiles | sort-object LastLSN){
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backup.filepath,$devicetype)
    $restore.devices.add($restoredevice)
    $restore.FileNumber = $backup.position
    $restore.Database = $backup.DatabaseName
    foreach ($f in $RelocateFile){
        $restore.RelocateFiles.add($f)
    }
    if ($backup.LastLSN -eq ($RestoreFiles | Sort-Object LastLSN)[-1].LastLSN){
        $restore.NoRecovery = $False
    }else{
        $restore.NoRecovery = $True
    }
    $restore.SqlRestore($RestoreServer)

    $restore.Devices.Remove($restoredevice)
}

This snippet could potentially take a while to run as it has to read the filelist from every backup file passed into it, so if you’ve got 100 transaction logs to process after your Full Backup be prepared for a bit of a wait!

Quite a lot going on in here. We start off by declaring a hash table $DBfiles and an array $DBFileSize. Next we loop through our list of backup files in $RestoreFiles. For each one we call ReadFileList and put the results into $DBfiles. We then loop through each file returned. If we’ve not seen the particular file before, we create an object holding its LogicalName, PhysicalName and Size, and add it to the $DBFileSize array. If we have seen the file before, we compare the size returned by ReadFileList to the one in $DBFileSize, and if the new value is larger we replace the value in $DBFileSize. And then we go back around the loop.
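
The “keep the biggest size seen so far” logic is easier to follow in isolation. This sketch uses a hashtable keyed on LogicalName rather than the array of objects in the script above; the names and sizes are invented:

```powershell
# Rows as they might come back from ReadFileList across several backups
$seen = @(
    [pscustomobject]@{ LogicalName = 'db_data'; Size = 100 },
    [pscustomobject]@{ LogicalName = 'db_data'; Size = 120 },
    [pscustomobject]@{ LogicalName = 'db_log';  Size = 50 }
)

# Track the largest size seen per logical file
$maxSizes = @{}
foreach ($f in $seen){
    if (-not $maxSizes.ContainsKey($f.LogicalName) -or $f.Size -gt $maxSizes[$f.LogicalName]){
        $maxSizes[$f.LogicalName] = $f.Size
    }
}
```

After the loop $maxSizes holds 120 for db_data and 50 for db_log, the high-water marks we need to plan disk space for.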

Once we’ve been through every file, we move on to actually checking our free space. We initialise a couple of variables first:

  • $RestoreLocations – An array holding all the allowed restore locations (one per drive, please, or the free-space tracking will get confused later!)
  • $LocationCount – A counter to keep track of which location from the above we’re currently looking at
  • $MarginForError – Just to leave some room for anything unexpected happening between checking and restoring; in the example this is set to 200MB
  • $FSO – A FileSystemObject COM object we use to get free drive space
  • $RelocateFile – An array which will hold our relocation information for pushing into our restore
  • $CurrentlyNeeded – A hash table we’ll programmatically populate next, used to keep track of the space our restore is going to use on each location

Next we populate the hashtable $CurrentlyNeeded with an entry of 0 for every entry in our $RestoreLocations array. Then we reset our counter $i to 0 for the next section.

We now loop through $DBFileSize. We’re using a While loop here as we want to try every possible location for every file in the array until we’ve either exhausted the possibilities or found somewhere for it to go.

The first thing we do in the loop is check whether the size of the current file, plus the space we’re already planning to use on the current drive, plus our margin for error, exceeds the free space on the drive containing the folder. If it does, we increment $LocationCount by 1 and go round the loop again with the same database file. If we increment $LocationCount beyond the end of our $RestoreLocations array, we write out an error message and stop looping, as nowhere has enough free space for this file.

If on one of the loops we can find some space, then we build a new RelocateFile object ($rf) and populate it with our file details. We add it to our array $RelocateFile, add the size of the restored file to the space used on the drive we’re going to put it on, increment $i so we move on to the next file, and reset $LocationCount to 0 so we start from the beginning of our locations again.

Next we use the same restore pattern as we used in yesterday’s post, with the addition of:

    foreach ($f in $RelocateFile){
        $restore.RelocateFiles.add($f)
    }

to pass our newly created array of file relocations into the restore object one by one. Then, all being well, we restore our database to its new home!

Today’s is a long post, but the core concepts are in the first half. The longer section afterwards is there to give you some hints about how you can use this functionality to really streamline automating your restores.

Tomorrow, we’re going to start looking at restoring to points in the database’s life other than the latest backup.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 17 of 31 Days of SQL Server Backup and Restore using PowerShell: More complex restore scenarios with Differentials

Building on yesterday’s technique for ensuring we restore the correct backup files in the correct order, today we’re going to look at what happens when you bring differentials into the mix, and maybe have more than one day’s backups for your target database.

Like yesterday, we’ll build a database to use as a demonstration:

create database DiffRestore;
go

use DiffRestore
go

create table steps(
StepID int identity(1,1),
StepTime datetime
)
go
backup database DiffRestore to disk='c:\DiffRestore\DiffRestore.bak'
go
insert into steps (steptime) values (current_timestamp)
go
backup log DiffRestore to disk='c:\DiffRestore\DiffRestore.trn'
go
insert into steps (steptime) values (current_timestamp)
go
backup database DiffRestore to disk='c:\DiffRestore\DiffRestore.bak'
go
insert into steps (steptime) values (current_timestamp)
go
backup log DiffRestore to disk='c:\DiffRestore\DiffRestore2.trn'
go
insert into steps (steptime) values (current_timestamp)
go
backup database DiffRestore to disk='c:\DiffRestore\DiffRestore.bak' with differential
go
insert into steps (steptime) values (current_timestamp)
go
backup log DiffRestore to disk='c:\DiffRestore\DiffRestore.trn'
go
insert into steps (steptime) values (current_timestamp)
go
backup database DiffRestore to disk='c:\DiffRestore\DiffRestore.bak' with differential
go
insert into steps (steptime) values (current_timestamp)
go
backup log DiffRestore to disk='c:\DiffRestore\DiffRestore.trn'
go

So we have a simple database, but with a more complicated backup structure. We have, in this order:

  1. Full Backup
  2. Transaction Log Backup
  3. Full Backup
  4. Transaction Log Backup
  5. Differential Backup
  6. Transaction Log Backup
  7. Differential Backup
  8. Transaction Log Backup

So to restore this database to its latest state we need to restore Backup 3 (the most recent Full backup), then Backup 7 (Differential backups are cumulative, so we only need to restore the latest one) and then any following transaction log backups. In this case that’s just Backup 8.

In this case we’ll extract the backups we require and build a new object to loop through for the actual restore. We’ll assume that the functions we created on Day 14 and Day 15 have been stored in the file SQLBackupFunctions.ps1:

Import-Module sqlps -DisableNameChecking
."c:\PSScripts\SQLBackupFunctions.ps1"

$backupfiles = @()
$backups = @()

$backupfiles += Get-SQLBackupFiles -FilePath c:\DiffRestore -SQLServer "Server1"

foreach($file in $BackupFiles){
    $backups +=Get-BackupContents -FilePath $file -SQLServer "Server1"
}

$RestoreFiles = @()

$RestoreFiles += $Backups | Where-Object {$_.BackupTypeDescription -eq "Database"} | Sort-Object LastLSN | Select-Object -First 1
$RestoreFiles += $Backups | Where-Object {$_.BackupTypeDescription -eq "Database Differential"} | Sort-Object LastLSN | Select-Object -First 1
$tmpLSN = $RestoreFiles | Measure-Object -Property LastLSN -Maximum
$RestoreFiles += $Backups | Where-Object {($_.LastLSN -ge $tmpLSN.Maximum) -and ($_.BackupTypeDescription -eq "Transaction Log")}

$RestoreServer = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("RestoreServer")
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
foreach ($backup in $RestoreFiles | sort-object LastLSN){
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backup.filepath,$devicetype)
    $restore.devices.add($restoredevice)
    $restore.FileNumber = $backup.position
    $restore.Database = $backup.DatabaseName
    if ($backup.LastLSN -eq ($RestoreFiles | Sort-Object LastLSN)[-1].LastLSN){
        $restore.NoRecovery = $False
    }else{
        $restore.NoRecovery = $True
    }
    $restore.SqlRestore($RestoreServer)

    $restore.Devices.Remove($restoredevice)
}

We do our normal scan of the filesystem to find our backup files and read their headers into our $Backups array. Then, as we know we want to restore to the latest point, we can pick the Full backup and the Differential backup with the highest LSNs. We then get all Transaction Log backups with a LastLSN greater than the Differential backup’s. All of these are added to an array $RestoreFiles, which we then step through, restoring each file and populating the required values from our array.

So far we’ve only looked at restoring onto servers where we’re assuming that our database doesn’t already exist. Tomorrow we’ll look at how we can easily redirect all the files in a restored database if needed.

Day 16 of 31 Days of SQL Server Backup and Restore using PowerShell: A more robust transaction restore

Over the last 2 days we’ve looked at getting backup information from 2 sources:

  • scraped from the backup files on disk (Day 14)
  • queried from SQL Server’s msdb database (Day 15)

and loading it into a PowerShell object. Today we’re going to put that object to use, showing how we can make sure we’re restoring files in the right order without having to rely on filenames or modified dates.

To setup we need a database and some backups. And we’ll make those backups a little complicated just to show off everything.

create database RobustRestore;
go

use robustrestore
go

create table steps(
StepID int identity(1,1),
StepTime datetime
)
go
backup database RobustRestore to disk='c:\robust\robustrestore.bak'
go
insert into steps (steptime) values (current_timestamp)
go
backup log RobustRestore to disk='c:\robust\robustrestore.trn'
go
insert into steps (steptime) values (current_timestamp)
go
backup log RobustRestore to disk='c:\robust\robustrestore2.trn'
go
insert into steps (steptime) values (current_timestamp)
go
backup log RobustRestore to disk='c:\robust\robustrestore.trn'
go
insert into steps (steptime) values (current_timestamp)
go
backup log RobustRestore to disk='c:\robust\robustrestore2.trn'
go

We now have a backup chain, and our transaction logs are interleaved across 2 files. Not something you’d want to see in production, but suitably complex for a simple demonstration.

First we use our disk scraping technique to load our object with all the backup information from the files. The function definitions are here – Day 14 of 31 Days of SQL Server Backup and Restore using PowerShell: Reading Backup information into a custom object

$backupfiles = @()
$backups = @()

$backupfiles += Get-SQLBackupFiles -FilePath c:\robust -SQLServer "Server1"

foreach($file in $BackupFiles){
    $backups +=Get-BackupContents -FilePath $file -SQLServer "Server1"
}

Assuming you get no errors, we now have a $backups object with 5 entries, one for each of the backups. So how are we going to use this to run a restore and make sure we get everything in the right order? Well, let’s take a look at one of the entries:

PS C:\> $backups.item(1)

FilePath               : C:\robust\robustrestore.trn
BackupName             :
BackupDescription      :
BackupType             : 2
ExpirationDate         :
Compressed             : 0
Position               : 1
DeviceType             : 2
UserName               : Server1\Administrator
ServerName             : Server1
DatabaseName           : RobustRestore
DatabaseVersion        : 706
DatabaseCreationDate   : 18/09/2013 09:14:01
BackupSize             : 77824
FirstLSN               : 40000000007600178
LastLSN                : 40000000016300001
CheckpointLSN          : 40000000007600178
DatabaseBackupLSN      : 40000000007600178
BackupStartDate        : 18/09/2013 09:14:02
BackupFinishDate       : 18/09/2013 09:14:02
SortOrder              : 0
CodePage               : 0
UnicodeLocaleId        : 1033
UnicodeComparisonStyle : 196609
CompatibilityLevel     : 110
SoftwareVendorId       : 4608
SoftwareVersionMajor   : 11
SoftwareVersionMinor   : 0
SoftwareVersionBuild   : 2100
MachineName            : Server1
Flags                  : 512
BindingID              : 815b85e5-d051-4afd-a192-bb9fb2f7302c
RecoveryForkID         : df243f5b-a451-427e-8622-ecc7996a390a
Collation              : Latin1_General_CI_AS
FamilyGUID             : df243f5b-a451-427e-8622-ecc7996a390a
HasBulkLoggedData      : False
IsSnapshot             : False
IsReadOnly             : False
IsSingleUser           : False
HasBackupChecksums     : False
IsDamaged              : False
BeginsLogChain         : True
HasIncompleteMetaData  : False
IsForceOffline         : False
IsCopyOnly             : False
FirstRecoveryForkID    : df243f5b-a451-427e-8622-ecc7996a390a
ForkPointLSN           :
RecoveryModel          : FULL
DifferentialBaseLSN    :
DifferentialBaseGUID   :
BackupTypeDescription  : Transaction Log
BackupSetGUID          : 2d5edca4-7ed2-473d-b2ca-124f9cca8784
CompressedBackupSize   : 77824
Containment            : 0

So we can see that we have all the information we could want in there to build up our restores. We can use BackupType or BackupTypeDescription to determine what sort of restore (database or log) we need, FilePath tells us where the file is, Position tells us where in that file the particular backup sits, and DatabaseName lets us confirm that we’re restoring a backup from the correct database.

But how do we ensure we’re restoring the backups in the right order? We have 2 options here: times or LSNs. We could order by BackupStartDate or BackupFinishDate, and most of the time this would be absolutely fine. But if we have backups that take a particularly long time to complete, you can end up with backups happening within the window of other backups.

A more reliable method is to use Log Sequence Numbers (LSNs). LSNs are strictly increasing numbers that SQL Server assigns to every transaction, written to the transaction log when the transaction is committed. LSNs are not contiguous, and are of no use for arithmetic (ie; subtracting FirstLSN from LastLSN for a transaction log backup won’t tell you how many transactions the backup contains). What they do guarantee is that if a_LSN is greater than b_LSN, then a_LSN was committed to the transaction log AFTER b_LSN. This makes them ideal for working out the order in which to restore our files. For more information on LSNs, please see Technet for an Introduction to Log Sequence Numbers.
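
One practical note: LSNs are far too large for a 32-bit integer, so if they ever reach you as strings, cast them to a numeric type such as [decimal] before comparing, so that the comparison is numeric rather than alphabetic. A small sketch, using the FirstLSN and LastLSN values from the example header above:

```powershell
# LSN values taken from the example backup header; cast to [decimal]
# because they overflow [int32]
$firstLSN = [decimal]'40000000007600178'
$lastLSN  = [decimal]'40000000016300001'

# True: the transaction behind $lastLSN was committed after $firstLSN
$lastLSN -gt $firstLSN
```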

So knowing that, we can now look at our restore script:

$RestoreServer = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("RestoreServer")
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
foreach ($backup in $backups | sort-object LastLSN){
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backup.filepath,$devicetype)
    $restore.devices.add($restoredevice)
    $restore.FileNumber = $backup.position
    $restore.Database = $backup.DatabaseName
    if ($backup.LastLSN -eq ($backups | Sort-Object LastLSN)[-1].LastLSN){
        $restore.NoRecovery = $False
    }else{
        $restore.NoRecovery = $True
    }
    $restore.SqlRestore($RestoreServer)

    $restore.Devices.Remove($restoredevice)
}

As you can see, we can now dynamically populate everything from our $Backups object, except for the SQL Server we want to restore to.

In that example we were assuming that we only have backups from one database, and even just the one SQL Server, in the folder we’d scanned. By altering the filtering in the foreach we can ensure we only see the details we want to see. For example, if you wanted to restore the backup of database “dbprod” from SQL Server “DBServer1” then you’d do the following:

$RestoreServer = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("RestoreServer")
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
foreach ($backup in $backups | Where-Object {($_.DatabaseName -eq "dbprod") -and ($_.ServerName -eq "DBServer1")} | sort-object LastLSN  ){
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backup.filepath,$devicetype)
    $restore.devices.add($restoredevice)
    $restore.FileNumber = $backup.position
    $restore.Database = $backup.DatabaseName
    if ($backup.LastLSN -eq ($backups | Where-Object {($_.DatabaseName -eq "dbprod") -and ($_.ServerName -eq "DBServer1")} | Sort-Object LastLSN)[-1].LastLSN){
        $restore.NoRecovery = $False
    }else{
        $restore.NoRecovery = $True
    }
    $restore.SqlRestore($RestoreServer)

    $restore.Devices.Remove($restoredevice)
}

From that it can be seen that we can really leverage PowerShell’s built-in cmdlets to extract just the restore information we need from a whole range of backups.

Tomorrow we’ll look at how we can work with multiple days’ worth of data, and also bring Differential backups into the picture.

Day 15 of 31 Days of SQL Server Backup and Restore using PowerShell: Getting Backup information from SQL Server

Yesterday we looked at scraping backup information from SQL Server backup files on disk. There is another source of information available: every backup in SQL Server is also recorded in tables in msdb. For the simple examples we’re looking at so far, we just need to consider 2 of them. These are:

  • msdb.dbo.backupset – Holds the information about every backup
  • msdb.dbo.backupmediafamily – Holds the location of the physical backup files

There are a number of other tables that also contain information needed when dealing with File or Filegroup backups, or with backups that have been spanned across multiple media sets.

The only issue with using this data, as opposed to the data ‘scraped’ from disk, is that most people manage their backup files outside of SQL Server, so there’s no guarantee that a file is still where SQL Server left it. It could have been archived, compressed or just plain deleted. This is the case even if you’re using the ‘Maintenance Cleanup Task’ in a Maintenance Plan to delete your old backups. Running the ‘History Cleanup Task’ will prune your backup history down to your specification (SQL Server won’t do this unless told to, and I have seen msdb databases that have ballooned to a ridiculous size with years of backup records!). Because of this you may want to filter the data you retrieve from SQL Server by your retention policy, and checking that any file you reference does in fact exist will avoid problems.

The data returned from ReadBackupHeader and msdb.dbo.backupset is VERY similar, but with differences in naming convention, and the enumerated values behind BackupType/type need a CASE statement to translate them. The basic query we’ll be running is:

select
 a.backup_finish_date as BackupFinishDate,
 a.backup_set_uuid as BackupSetGUID,
 a.backup_size as BackupSize,
 a.backup_start_date as BackupStartDate,
 a.begins_log_chain as BeginsLogChain,
 a.checkpoint_lsn as CheckpointLSN,
 a.collation_name as Collation,
 a.compatibility_level as CompatibilityLevel,
 a.compressed_backup_size as CompressedBackupSize,
 a.database_backup_lsn as DatabaseBackupLSN,
 a.database_creation_date as DatabaseCreationDate,
 a.database_name as DatabaseName,
 a.database_version as DatabaseVersion,
 a.description as BackupDescription,
 a.differential_base_guid as DifferentialBaseGUID,
 a.differential_base_lsn as DifferentialBaseLSN,
 a.expiration_date as ExpirationDate,
 a.family_guid as FamilyGUID,
 a.first_lsn as FirstLSN,
 a.first_recovery_fork_guid as FirstRecoveryForkID,
 a.flags as Flags,
 a.fork_point_lsn as ForkPointLSN,
 a.has_backup_checksums as HasBackupChecksums,
 a.has_bulk_logged_data as HasBulkLoggedData,
 a.has_incomplete_metadata as HasIncompleteMetaData,
 a.is_copy_only as IsCopyOnly,
 a.is_damaged as IsDamaged,
 a.is_force_offline as IsForceOffline,
 a.is_password_protected,
 a.is_readonly as IsReadOnly,
 a.is_single_user as IsSingleUser,
 a.is_snapshot as IsSnapshot,
 a.last_family_number,
 a.last_lsn as LastLSN,
 a.last_media_number,
 a.machine_name as MachineName,
 a.name as BackupName,
 a.position as Position,
 a.recovery_model as RecoveryModel,
 a.server_name as ServerName,
 a.software_build_version as SoftwareVersionBuild,
 a.software_major_version as SoftwareVersionMajor,
 a.software_minor_version as SoftwareVersionMinor,
 a.sort_order as SortOrder,
 a.unicode_compare_style as UnicodeComparisonStyle,
 a.unicode_locale as UnicodeLocaleId,
 a.user_name as UserName,
 case a.type
	when 'D' then 'Database'
	when 'I' then 'Differential database'
	when 'L' then 'Log'
	when 'F' then 'File or filegroup'
	when 'G' then 'Differential file'
	when 'P' then 'Partial'
	when 'Q' then 'Differential partial'
 end as BackupTypeDescription,
 case [type]
	when 'D' then 1
	when 'I' then 5
	when 'L' then 2
	when 'F' then 4
	when 'G' then 6
	when 'P' then 7
	when 'Q' then 8
 end as BackupType,
 b.physical_device_name as 'FilePath'
 from
    msdb.dbo.backupset a inner join msdb.dbo.backupmediafamily b
        on a.media_set_id=b.media_set_id

To fill our data from SQL Server we’ll create 2 functions:

  1. Get-SQLServerBackups
  2. Test-SQLServerBackupExistence

The first will retrieve a set of backup information from a specified SQL Server instance, optionally filtering the data to backups newer than a specific date, or to just the last x days.

Our 2nd function will take in an instance of our $Backups object and step through it, testing whether each file exists and removing those that don’t.

So with our first function:

function Get-SQLServerBackups
{param([string]$SQLServer,[int]$ExcludeDaysOlder=0,[string]$BackupNewerThan='01/01/1900')
push-location
import-module "SQLPS" -DisableNameChecking
pop-location

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($SQLServer)

$sql="select a.backup_finish_date as BackupFinishDate,
 a.backup_set_uuid as BackupSetGUID,
 a.backup_size as BackupSize,
 a.backup_start_date as BackupStartDate,
 a.begins_log_chain as BeginsLogChain,
 a.checkpoint_lsn as CheckpointLSN,
 a.collation_name as Collation,
 a.compatibility_level as CompatibilityLevel,
 a.compressed_backup_size as CompressedBackupSize,
 a.database_backup_lsn as DatabaseBackupLSN,
 a.database_creation_date as DatabaseCreationDate,
 a.database_name as DatabaseName,
 a.database_version as DatabaseVersion,
 a.description as BackupDescription,
 a.differential_base_guid as DifferentialBaseGUID,
 a.differential_base_lsn as DifferentialBaseLSN,
 a.expiration_date as ExpirationDate,
 a.family_guid as FamilyGUID,
 a.first_lsn as FirstLSN,
 a.first_recovery_fork_guid as FirstRecoveryForkID,
 a.flags as Flags,
 a.fork_point_lsn as ForkPointLSN,
 a.has_backup_checksums as HasBackupChecksums,
 a.has_bulk_logged_data as HasBulkLoggedData,
 a.has_incomplete_metadata as HasIncompleteMetaData,
 a.is_copy_only as IsCopyOnly,
 a.is_damaged as IsDamaged,
 a.is_force_offline as IsForceOffline,
 a.is_password_protected,
 a.is_readonly as IsReadOnly,
 a.is_single_user as IsSingleUser,
 a.is_snapshot as IsSnapshot,
 a.last_family_number,
 a.last_lsn as LastLSN,
 a.last_media_number,
 a.machine_name as MachineName,
 a.name as BackupName,
 a.position as Position,
 a.recovery_model as RecoveryModel,
 a.server_name as ServerName,
 a.software_build_version as SoftwareVersionBuild,
 a.software_major_version as SoftwareVersionMajor,
 a.software_minor_version as SoftwareVersionMinor,
 a.sort_order as SortOrder,
 a.unicode_compare_style as UnicodeComparisonStyle,
 a.unicode_locale as UnicodeLocalID,
 a.user_name as UserName,
 case a.type
	when 'D' then 'Database'
	when 'I' then 'Differential database'
	when 'L' then 'Log'
	when 'F' then 'File or filegroup'
	when 'G' then 'Differential file'
	when 'P' then 'Partial'
	when 'Q' then 'Differential partial'
	end as BackupTypeDescription,
	case [type]
	when 'D' then 1
	when 'I' then 5
	when 'L' then 2
	when 'F' then 4
	when 'G' then 6
	when 'P' then 7
	when 'Q' then 8
	end as BackupType,
	b.physical_device_name as 'FilePath'
	 from msdb.dbo.backupset a inner join msdb.dbo.backupmediafamily b on a.media_set_id=b.media_set_id
where 1=1"

if ($ExcludeDaysOlder -ne 0){
    $sql = $sql+" and a.backup_start_date>dateadd(dd,-"+$ExcludeDaysOlder.tostring()+",current_timestamp)"
}

if ($BackupNewerThan -ne '01/01/1900'){
    $sql = $sql+" and a.backup_start_date>'"+$BackupNewerThan+"'"
}

$dt = Invoke-Sqlcmd -ServerInstance $SQLServer -Query $sql

return $dt
}

This is a very simple function; it just looks huge because of the SQL statement in $sql. If the parameters are supplied we append the appropriate WHERE filter. The query is then sent to the SQL Server using the Invoke-Sqlcmd cmdlet, which returns the data in a DataTable that we then return.

And then our second function:

function Test-SQLServerBackupExistence
{param ($BackupArray)

Set-location c:\

foreach ($row in $BackupArray.filepath | sort -Unique){
    if ((Test-Path $row) -ne $true){
        $BackupArray = $BackupArray | Where-Object {$_.FilePath -ne $row}
     }
}
return $BackupArray
}

This takes in an array as a parameter. By piping the $BackupArray.FilePath property through Sort -Unique we only need to check each file once. If a file doesn't exist we remove all 'rows' with that FilePath from the array, then return what remains.
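
The filtering pattern can be tried in isolation with mock data. This is just an illustrative sketch: the rows and the missing path are made up, and $PSHOME only serves as a path that certainly exists:

```powershell
# Self-contained sketch of the Sort -Unique / Test-Path filtering, using mock rows
# instead of real msdb data.
$BackupArray = @(
    [PSCustomObject]@{DatabaseName='db1'; FilePath=$PSHOME},             # exists
    [PSCustomObject]@{DatabaseName='db1'; FilePath='C:\nosuchfile.bak'}, # doesn't
    [PSCustomObject]@{DatabaseName='db2'; FilePath='C:\nosuchfile.bak'}  # doesn't
)

foreach ($row in $BackupArray.FilePath | Sort-Object -Unique){
    if (-not (Test-Path $row)){
        # drop every row that references the missing file
        $BackupArray = $BackupArray | Where-Object {$_.FilePath -ne $row}
    }
}

$BackupArray   # only the row whose FilePath exists remains
```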

The one drawback with this method is that if the same file is being used repeatedly for backups, then all those backups will stay in the array as we are only checking the existence of the file not seeing what it actually contains.

To load the data we’d call the functions as follows:

$Backups = Get-SQLServerBackups -SQLServer Server1 -ExcludeDaysOlder 7
$Backups = Test-SQLServerBackupExistence $Backups

As we’ve ensured that we’ve kept the same structure for the Data.Tables returned from the disk scrape and the SQL query we can combine them easily like so:

$Backups = Get-SQLServerBackups -SQLServer Server1 -ExcludeDaysOlder 7
$Backups = Test-SQLServerBackupExistence $Backups
$BackupFiles += Get-SQLBackupFiles -FilePath c:\psbackups2 -SQLServer "Server2"
foreach($File in $BackupFiles){
    $Backups += Get-BackupContents -FilePath $File -SQLServer "Server2"
}

As we’re using a standard PowerShell Data Table we can also us standard PowerShell cmdlets to save it to disk, and then reload it at a later date. This means we can scrape disks overnight during periods of low usage and then load it up during the day. Or we can update our records from SQL Server on a daily basis to speed the process up.

To save the information:

$Backups | Export-CliXML c:\backup-store.xml

To load the information:

$Backups = Import-CliXML c:\backup-store.xml
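
One thing worth knowing: Import-CliXML hands you back deserialised copies of the original objects rather than a live DataTable, but the properties we rely on survive the roundtrip. A quick self-contained check with a mock row (the values are made up):

```powershell
# Sketch: confirm a CliXML roundtrip preserves the properties we filter and sort on.
$row = [PSCustomObject]@{DatabaseName='psdb1'; LastLSN=41000000004700037; FilePath='c:\psbackups\psdb1.bak'}

$store = Join-Path ([System.IO.Path]::GetTempPath()) 'backup-store-test.xml'
$row | Export-Clixml $store
$reloaded = Import-Clixml $store

$reloaded.DatabaseName   # psdb1 - same properties, though it's now a deserialised copy
```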

Tomorrow we’ll be looking at how we can use this new objects to ensure we restore the right files in the right order in a more robust manner than just relying on file names and last modified dates.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 13 of 31 Days of SQL Server Backup and Restore using PowerShell: Reading the contents of a Backup File

Yesterday we saw how easy it is to restore a SQL Server database from backups with PowerShell if we have a folder that contains:

  • Only SQL backup files
  • All files are in chronological/LSN order
  • The oldest file is the Full backup
  • There are no differential or partial backups in the mix
  • Only one database’s backups
  • Each backup file only contains 1 backup

Unfortunately we don’t always get that. So we need to be able to find out what we have and work out the the best way to restore our database. Over the next 2 days I’m going to introduce how we can read all the files we have, extract the information we need and then store it in a custom PowerShell object so we can manipulate it for ourselves. We will then look at how we can populate this object if we’re lucky enough to also have access to the original server.

Today we’ll look at reading the contents of the backup files and finding out what’s inside.

Let’s start with just working out which files in a folder are actually of interest to us. Let’s start with the worst case scenario; a folder full of files, some of which may be SQL Server backups of one kind or another, but someone hasn’t put file extensions on or has been very lax with which ones they have done. Like so:

PS C:\psbackups> Get-ChildItem

    Directory: C:\psbackups

Mode          LastWriteTime     Length Name
----          -------------     ------ ----

-a---  14/09/2013     07:46          0 file1
-a---  14/09/2013     07:46    5605888 file2

PS C:\psbackups>

OK, in this example you can probably deduce which one is not a backup file! But if you had folders full of these, how would you work out which ones were of interest? You’d use the ReadMediaHeader method of your $Restore object to try to read the header that SQL Server writes at the start of all its backup files. This is a very quick operation, even on huge backup files, as the header is right at the start of the file. Unfortunately, ReadMediaHeader doesn’t return a nice “No, this isn’t a SQL Server backup” message, so we have to wrap it in a Try, Catch, Finally block and handle it ourselves:

import-module "SQLPS" -DisableNameChecking

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("WIN-C0BP65U3D4G")
$restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

$files = gci C:\psbackups

$BackupFiles = @()

foreach ($file in $files){
    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($file.FullName,$devicetype)

    $restore.Devices.add($restoredevice)
    $errcnt=0
    try{
        $restore.ReadMediaHeader($sqlsvr)
    }
    catch [System.Exception]{
        write-output "$file is not a sql backup file `n"
        $errcnt =1
    }
    finally {
        if ($errcnt -ne 1){
            write-output "$file is a sql backup file `n"
            $BackupFiles += $file
        }
        $errcnt = 0
    }
    $restore.Devices.remove($restoredevice)
    Remove-Variable restoredevice
}

We add the positive matches to a $BackupFiles array so we can refer to them later.

So, what can you do if someone’s running their backups like this:

backup database psdb1 to disk='c:\psbackups\psbd1_backup.bak'
backup log psdb1 to disk='c:\psbackups\psbd1_backup.bak'
backup database psdb1 to disk='c:\psbackups\psbd1_backup.bak'
backup log psdb1 to disk='c:\psbackups\psbd1_backup.bak'

I have seen something similar out in the wild: they had a BIG backup file for quite a small database, and making sure you restored the version you wanted took a lot of looking up. In this case we use PowerShell’s implementation of RESTORE FILELISTONLY’s cousin, RESTORE HEADERONLY: ReadBackupHeader:

Import-Module "SQLPS" -DisableNameChecking

$SQLSvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("WIN-C0BP65U3D4G")
$Restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
$DeviceType = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

$file = gci c:\psbackups\psbd1_backup.bak

$RestoreDevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($file.FullName,$devicetype)
$Restore.Devices.Add($RestoreDevice)
$Restore.ReadBackupHeader($sqlsvr) | Format-List -Property DatabaseName, BackupStartDate, FirstLSN, BackupTypeDescription, Position
$Restore.Devices.Remove($restoredevice)

Here we pipe the output from ReadBackupHeader through Format-List to extract just the values we want:

DatabaseName          : psdb1
BackupStartDate       : 16/09/2013 07:18:37
FirstLSN              : 40000000049700037
BackupTypeDescription : Database
Position              : 1

DatabaseName          : psdb1
BackupStartDate       : 16/09/2013 07:18:37
FirstLSN              : 40000000049000001
BackupTypeDescription : Transaction Log
Position              : 2

DatabaseName          : psdb1
BackupStartDate       : 16/09/2013 07:18:37
FirstLSN              : 41000000004700037
BackupTypeDescription : Database
Position              : 3

DatabaseName          : psdb1
BackupStartDate       : 16/09/2013 07:18:38
FirstLSN              : 41000000002600001
BackupTypeDescription : Transaction Log
Position              : 4

ReadBackupHeader returns the same information as RESTORE HEADERONLY does.

Now that we have the position of each backup within the file, we can restore the one we want by using the FileNumber property:

Import-Module "SQLPS" -DisableNameChecking

$SQLSvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("WIN-C0BP65U3D4G")
$Restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
$DeviceType = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

$file = gci c:\psbackups\psbd1_backup.bak

$restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($file.FullName,$devicetype)
$Restore.Devices.add($restoredevice)
$Restore.Database = "psdb1"
$Restore.FileNumber = 3
$Restore.SQLRestore($SQLSvr)
$restore.Devices.Remove($restoredevice)

This will restore the most recent full backup from our example.
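
Rather than hardcoding FileNumber = 3, we could derive it from the header, picking out the position of the most recent full backup for ourselves. This is only a sketch: it needs a live instance to run, and it assumes $Restore and $SQLSvr are set up as in the example above, with the device still attached:

```powershell
# Hypothetical sketch: derive FileNumber from ReadBackupHeader instead of hardcoding it.
$LatestFull = $Restore.ReadBackupHeader($SQLSvr) |
    Where-Object { $_.BackupTypeDescription -eq "Database" } |
    Sort-Object FirstLSN -Descending |
    Select-Object -First 1

$Restore.Database = "psdb1"
$Restore.FileNumber = $LatestFull.Position
$Restore.SQLRestore($SQLSvr)
```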

So now we can find valid SQL Server backups on a given filesystem, and then find out what’s contained within them. Tomorrow we’ll look at taking this information, and putting it into a custom object so we can collate the results from multiple files and build up our restore plan.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 12 of 31 Days of SQL Server Backup and Restore using PowerShell: Simple Transaction Restore

Yesterday we looked at the simplest possible example of a SQL Server restore: restoring a single backup file. Now we’re going to get a little bit smarter and also restore transaction logs.

So the scenario is that you have a folder containing your full database backup and the subsequent transaction logs, like so:

PS C:\> gci c:\psbackups

    Directory: C:\psbackups

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---        13/09/2013     11:23    2648576 psbd1_backup_201309131100.bak
-a---        13/09/2013     11:23    1800704 psbd1_backup_201309131105.trn
-a---        13/09/2013     11:23      90624 psbd1_backup_201309131110.trn
-a---        13/09/2013     11:23      90624 psbd1_backup_201309131115.trn

PS C:\>

So how do we do this? Exactly how we’d do it in T-SQL: restore the full backup with NoRecovery, restore all the transaction log backups in order, also with NoRecovery, and then recover the database after the last one has been restored.

Normally that means writing lots of lines of T-SQL (it’s always just before the next full backup isn’t it, and you have 92 transaction logs to restore!). Thanks to PowerShell’s objects we can do better than that.

So to the script:

Import-Module SQLPS -DisableNameChecking

$SQLSvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server -ArgumentList ("Server1")
$Restore = New-Object -TypeName Microsoft.SQLServer.Management.Smo.Restore
$DeviceType =[Microsoft.SqlServer.Management.Smo.DeviceType]::File

$Restore.Database='psrestore'
$Restore.NoRecovery = $TRUE

$Files = Get-ChildItem C:\psbackups\
$i=1

foreach ($File in $Files){
	if($i -eq $Files.count){
	    $Restore.NoRecovery= $FALSE
	}
	$BackupDevice = New-Object Microsoft.SqlServer.Management.Smo.BackupDeviceItem -ArgumentList $file.FullName, $DeviceType
	$Restore.Devices.Add($BackupDevice)
	$Restore.SQLRestore($SQLSvr)
	$Restore.Devices.Remove($BackupDevice)
	remove-variable BackupDevice
	$i++
}

First we do our normal setup, loading the SMO module and creating some basic objects. Next we set the name of the database we’re restoring, and also tell our $Restore object that we don’t want to recover the database at the end of the restore (the default option is to recover it).

We use the Get-ChildItem cmdlet to load the contents of our backup directory into an object called $Files, and create a counter variable $i and initialise it to 1.

We now step through each $File in our $Files object. We use the $Files.count property and $i to work out when we are restoring the last file in the list; when we are, we set $Restore.NoRecovery to $FALSE, thereby recovering the database after the restore.

We restore the file and increment the counter variable by 1, and then go through the loop again if we aren’t on the last file.

We can do the same with the Restore-SqlDatabase cmdlet introduced with SQL Server 2012:

Import-Module SQLPS -DisableNameChecking

$ServerName = "Server1"

$RestoreDatabase='psrestore'
$Files = Get-ChildItem C:\psbackups\
$i=1

foreach ($File in $Files){
    if($i -eq $Files.count){
        Restore-SqlDatabase -ServerInstance $ServerName -Database $RestoreDatabase -BackupFile $File.FullName
    }
    else{
        Restore-SqlDatabase -ServerInstance $ServerName -Database $RestoreDatabase -BackupFile $File.FullName -NoRecovery
    }
    $i++
}

When we use this cmdlet we can’t set the NoRecovery property as we did with SMO, as it’s a switch rather than a parameter we can flip. So we use the same counter logic to detect when we’re restoring the last file, and only then run the cmdlet without the -NoRecovery switch.

This is fairly straightforward, and simple to write if you know that you only have the files for 1 database in a folder, and that they’re in correct chronological/LSN order with the full backup being the oldest. But what do you do if you just have a folder full of random files from multiple databases, from multiple days, with a mix of full, differential and transaction backups? That’s what we’re going to explore over the next couple of days. Tomorrow we’ll begin by looking at how to look inside the backup files.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 11 of 31 Days of SQL Server Backup and Restore using PowerShell: Simple Database Restore

As we’ve seen in the past 10 days, performing SQL Server backups with PowerShell is pretty straight forward. Now we’re going to move on to Restores. This is where PowerShell really starts to shine. SQL Server doesn’t offer any real options for automating restores of Databases for checking or verification. You can schedule specific pieces of T-SQL, but that only applies to specific databases, or you end up tying yourself in knot with dynamic SQL.

As we did with backups we’ll start off with the simplest form of restore to introduce the basic concepts we’ll be building on. Here we’re just going to take a full database backup where we know which database it’s from, restore and recover it.

So here’s the scripts, first off here’s the full SMO version:

import-module "SQLPS" -DisableNameChecking

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("Server1")
$restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

$backupname = "C:\psbackups\psrestore.bak"

$restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backupname,$devicetype)

$restore.Database = "psrestore"
$restore.ReplaceDatabase = $True
$restore.Devices.add($restoredevice)
$restore.sqlrestore($sqlsvr)

As you can see this is pretty much identical to the basic PowerShell backup script from Day 1. The only real differences are that we have a $restore object of type Microsoft.SqlServer.Management.Smo.Restore and we’re calling a SqlRestore method; all the device creation methods are exactly the same.

And here’s the simple form of the Restore-SQLDatabase cmdlet:

Restore-SqlDatabase -ServerInstance "Server1" -Database "psrestore" -BackupFile "C:\psbackups\psrestore.bak"

Which is even simpler.

So far so simple. But as we’ll explore over the next few days, this gets a lot more complicated when you try to make it flexible, able to cope with complex restores, and robust enough to ensure a properly recovered database.

The next step towards that is tomorrow’s post on restoring transaction logs.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 10 of 31 Days of SQL Server Backup and Restore using PowerShell: Automating with Task Scheduler

Yesterday we looked at scheduling PowerShell scripts with SQL Server Agent, but what do you do if you’re running SQL Server Express and don’t have that luxury?

Well, it turns out that it’s pretty easy to schedule PowerShell tasks with the Windows Task Scheduler.

Images in this post are from Task Scheduler on Windows 2012, but they are pretty much identical back to Windows 2008.

Task Scheduler offers a nice “Create Basic Task Wizard”, which quickly runs you through the pertinent options from the full task creation process, but unfortunately this doesn’t let you set a more complicated schedule than once a day:

basic-task-wizard

To do that you need to create a full scheduled task. Stepping through the process isn’t too complicated, but there are a few gotchas to watch out for.

schedule-properties

The first pane you’ll come to is the general properties one. Here you can provide a title for your task, and change the user under whose security context the job will run. You’ll need to tick the box “Run whether user is logged on or not”, otherwise it won’t run when the user is logged off. If you run it as any user other than the one creating the task, then any time you modify the task you’ll be prompted to provide that user’s credentials:

schedule-password

This is to prevent someone modifying a job to abuse any elevated privileges it runs under.

The next step along is ‘Triggers’:

Schedule-multiple-triggers

Here we can build up the conditions under which our scheduled task will fire. As shown in the above screenshot, you can set multiple triggers for the same task. For instance, you might want to run transaction log backups every 15 minutes during 9-5 production hours, but only every 2 hours outside of that window.

You build or edit a trigger with the following screen:

schedule-triggers

You can set pretty much any schedule you want here. Don’t feel constrained by the values in the drop-down boxes; you can type your own values if you want (for instance, every 7 minutes).

Once we have our triggers we can define the actions that we want to take when they trigger:

schedule-action

Just like with Triggers, you can have multiple Actions that will all fire on a trigger:

schedule-action-detail

Note that the Action we take is to start PowerShell, not to run our script. This is because .ps1 PowerShell scripts are not themselves executable, but need to be passed to PowerShell to run. So PowerShell is our program; here I’m relying on the system %PATH% variable to find it, but if you want to ensure the correct exe is started you can provide the full path to it. As PowerShell is fired up we pass our script in as an argument. As long as all file references in your scripts are fully qualified you probably don’t need to populate the “Start In” box, but if you are loading custom modules or other sub-scripts then it may be easier to start the script in a specific location.
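
For example, the Action fields might be filled in like this (the script path is hypothetical):

```text
Program/script:  PowerShell.exe
Add arguments:   -ExecutionPolicy Bypass -File "C:\PSScripts\backup_all_dbs.ps1"
Start in:        C:\PSScripts
```

Using -File rather than -Command sidesteps most quoting issues, and -ExecutionPolicy Bypass saves the task failing on a machine with a restrictive execution policy.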

Now, as this is a series on PowerShell it’d be a bit remiss not to show you how to create a new scheduled task using PowerShell itself. The bad news is that this is only implemented in Windows 2012 and won’t be back-ported:

$action = New-ScheduledTaskAction -Execute PowerShell.exe -Argument "C:\scripts\backup_all_dbs.ps1"
$trigger = New-ScheduledTaskTrigger -once -at 00:01 -RepetitionInterval (New-TimeSpan -Minutes 30) -RepetitionDuration (new-timespan -Days 1)
Register-ScheduledTask -TaskName "SQL Backup" -TaskPath "\" -Action $action -Trigger $trigger -user "Domain\User" -Password "p@55w0rd"

In earlier versions you have to use schtasks. schtasks has a large and flexible syntax which is beyond the scope of this article, but it is well documented here with plenty of examples. For completeness, here is the schtasks command line for the above PowerShell:

schtasks /create /sc minute /mo 30 /tn "SQL Backup" /tr "powershell.exe -c c:\scripts\SQL_backup.ps1" /ru Domain\User

When you run this, schtasks will prompt you for the user’s password.

Tomorrow we’ll begin to look at using PowerShell to restore SQL Server databases

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 9 of 31 Days of SQL Server Backup and Restore using PowerShell: Automating with SQL Server Agent

So far in this series we’ve looked at a variety of ways you can use PowerShell to run your SQL Server backups, but we’ve not touched on how you can automate them. That is the topic of the next 2 posts. Today we’ll look at automating the scripts with SQL Server Agent, and tomorrow we’ll look at doing it with Task Scheduler.

SQL Agent has been able to schedule PowerShell jobs since 2008, and the basic method hasn’t changed. Screenshots here are from SQL Server 2012:

PowerShell is just another step type in SQL Server that you can pick from the dropdown:

New job step for PowerShell SQL Agent task

And then you can enter your PowerShell into the box provided. The Open button allows you to browse for existing files, but only puts the current content into the box:

New-job-step-powershell

Here I’ve used the basic looped database backup script from Day 2. Once crucial point to spot here, is that by default the script will be run as the “SQL Server Agent Service Account”. Now if you’re following SQL Server best practice, then this account won’t have much access to your local disks or to network shares, which is likely to cause your backup script to fail.

The way around this is to create a SQL Server Agent Proxy for PowerShell and associate it with a credential that does have access to your backup locations, which keeps you in line with best security practice.

The other problem that can occur is that so far we have incorporated very little checking and error reporting into our scripts. This means that if an error occurs you are likely to get back less than useful error messages. Looking through the SQL Server errorlog can often provide more details. I’ll be covering checking and error reporting in more detail towards the end of this series.

If you’d rather run from saved .ps1 PowerShell scripts then you’ll need to use the “Operating system (CmdExec”) Step type:

new-job-step-cmdexec

You can then enter the call to your script into the Command box. Note that we are actually calling PowerShell.exe and then passing our script in as a parameter. You can’t call a PowerShell script directly.

new-job-step-cmdexec-powershell2

The same security restrictions apply to this step as to the PowerShell one, so you may have to create a second SQL Server Agent Proxy or grant CmdExec to an existing proxy.

Now you’ve created you job you can schedule it like any other SQL Server job.

So this is all great if you’ve got SQL Server Agent, but what if you’re using SQL Server Express and don’t have SQL Agent to schedule your backups? Well, we’ll cover scheduling PowerShell with Windows Task Scheduler tomorrow.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 8 of 31 Days of SQL Server Backup and Restore using PowerShell: Asynchronous Backups – Part 2 – PowerShell Jobs

Yesterday we looked at using PowerShell events to track asynchronous backups fired off as concurrent sessions from the same script. All of those backups ran independently of each other, so what would you do if you wanted to run batches of backups concurrently (i.e. all dbs on Server A and all dbs on Server B at the same time)?

That’s where PowerShell Jobs come in. Introduced in PowerShell 3.0 they let you fire off a block of script and have it run in the background.  You can then wait for the jobs to finish, or poll for them yourself.

We’ll use the example above of wanting to backup all the databases on 2 SQL Server installs concurrently:

$JobFunction={function backup_server_dbs
{
    param([String]$ServerName)

    Import-Module "SQLPS" -DisableNameChecking

    $SQLSvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($ServerName)

    $Db = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Database

    foreach ($Db in $SQLSvr.Databases | Where-Object {$_.Name -ne "tempdb"}){
        $Backup = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Backup
        $Backup.Action = [Microsoft.SQLServer.Management.SMO.BackupActionType]::Database
        $BackupSetDescription = "Full Backup of "+$Db.Name
        $Backup.Database = $Db.Name

        $BackupName = "c:\psbackups\"+$Db.Name+"_"+[DateTime]::Now.ToString("yyyyMMdd_HHmmss")+".bak"
        $DeviceType = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
        $BackupDevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($BackupName,$DeviceType)
        try{
            Backup-SqlDatabase -ServerInstance $ServerName -Database $Db.Name -BackupFile $BackupName -BackupSetDescription $BackupSetDescription
        }
        Catch [System.Exception]{
            $output += "`nError with $($Db.Name) at "+(get-date)
            $output += ($error[0].ToString())
            $errcnt = 1
        }
        Finally{
            if ($errcnt -ne 1){
                $output += "`n Finished backup of $($Db.Name) at "+(get-date)
            }
            $errcnt = 0
        }
    }
    return $output
}
} 

$servers = @("Server1","Server2")
$jobs = @()

foreach ($servername in $servers){
    $jobs += Start-job -ScriptBlock {backup_server_dbs $args[0]} -InitializationScript $JobFunction -ArgumentList($servername)
}

$jobs | Wait-Job

$FullOutput = ""
foreach ($job in $jobs){
    Receive-Job $job | Tee-Object -Variable joutput
    $FullOutput += $joutput
    Remove-Job $job
}

Write-Output $FullOutput

This script starts off a little differently to every backup script we’ve looked at so far in this series. The first thing we do is create a variable, $JobFunction, which holds the definition of a function called backup_server_dbs. The reason for this will be explained slightly later on.

backup_server_dbs accepts one parameter, a string called $ServerName. After that it should look pretty familiar from our previous articles: we build a SQL Server connection, loop through all of its databases (ignoring poor TempDB) and back them up. The main difference is the inclusion of a Try, Catch, Finally block around the actual backup. If you’ve not come across this before, it’s a simple way of handling errors in PowerShell. At a high level we are:

  • Try: Try doing something
  • Catch: If “trying” has caused a error, Catch it
  • Finally: Whatever happened when trying, Finally we’ll do this

So here we Try to perform the backup; if there’s an error we Catch it and add the error message to our output string, and Finally, if we’ve had no errors, we append the completion time to our output string.
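
The pattern is easy to see in isolation with a command that deliberately fails (the path is made up):

```powershell
# Minimal Try/Catch/Finally sketch, mirroring the structure used in the backup loop.
$errcnt = 0
$output = ""
try{
    Get-Item "C:\no\such\file" -ErrorAction Stop   # this will throw
    $output += "`nGot the file"                    # never reached
}
catch [System.Exception]{
    $output += "`nError at "+(Get-Date)
    $output += ($error[0].ToString())
    $errcnt = 1
}
finally{
    if ($errcnt -ne 1){
        $output += "`nFinished cleanly at "+(Get-Date)
    }
}
Write-Output $output
```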

Once we’ve attmempted to backup all of our databases we exit the function returning out $output variable.

Moving on from the function, we set up 2 array variables: the first, $Servers, holds our server names, and the 2nd is unpopulated but is going to hold the jobs we create to run the backups.

Using foreach we loop through each server we’ve provided, and start a new job for each one with the Start-Job cmdlet.

And this is the reason we created a variable holding a function: so we can pass it into Start-Job as an InitializationScript and then call the function contained within it in the ScriptBlock parameter. A slightly cleaner way of doing this is to put the function in a secondary PowerShell script file and pass the file location to InitializationScript, but that’s a little hard to show in a simple blog posting, and this way also keeps everything in one file.
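
The mechanics are easier to see with a throwaway function standing in for backup_server_dbs (Say-Hello here is purely illustrative):

```powershell
# Minimal Start-Job sketch: the function definition lives in a scriptblock, is
# injected into each job via -InitializationScript, and called in -ScriptBlock.
$JobFunction = { function Say-Hello { param([string]$Name) return "Hello, $Name" } }

$jobs = @()
foreach ($name in @("Server1","Server2")){
    $jobs += Start-Job -ScriptBlock { Say-Hello $args[0] } `
                       -InitializationScript $JobFunction -ArgumentList $name
}

$results = $jobs | Wait-Job | Receive-Job
$jobs | Remove-Job
$results   # "Hello, Server1" and "Hello, Server2", in completion order
```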

Once we’ve submitted the jobs we pass the array containing them ($jobs) into Wait-Job. This pauses execution until all the jobs in $jobs have completed.

Once all the jobs have completed we foreach through each $job in the $jobs array, use the Receive-Job cmdlet to get the return information from the job, pass it through Tee-Object to grab the $output into a temporary holder and append it to a full output variable, and then pass the job to Remove-Job to clear it from the jobs list.

And finally we output the Success and Error Messages back to the console.

This basic approach can be used any time you want batches of work running at the same time. By extending the function to take a filegroup name and a backup location, you could back up each filegroup to a different set of disks, thereby maximising your throughput by not hitting just a single controller, NIC or HBA:

Import-Module SQLPS -DisableNameChecking

$JobFunction={function backup_server_dbs
{param([String]$ServerName, [String]$DatabaseName, [String]$FileGroup, [String]$BackupLocation)
    
    Import-Module "SQLPS" -DisableNameChecking

    $BackupSetDescription = "Filegroup backup of Filegroup "+$FileGroup+" of "+$DatabaseName
    
    $BackupName = $BackupLocation+"\"+$DatabaseName+"_"+$FileGroup+"_"+[DateTime]::Now.ToString("yyyyMMdd_HHmmss")+".bak"
      try{
        Backup-SqlDatabase -ServerInstance $ServerName -Database $DatabaseName -BackupFile $BackupName -BackupSetDescription $BackupSetDescription -DatabaseFileGroup $FileGroup -BackupAction Files
    }
    Catch [System.Exception]{
        $output += "`nerror with $DatabaseName at "+(get-date)
        $output += "`n vars = $servername, $DatabaseName, $BackupName, $BackupLocation"
        $output += ($error[0].ToString())
        $errcnt = 1
    }
    Finally{
        if ($errcnt -ne 1){
            $output += "`n finished backup of $DatabaseName "+(get-date)
        }
        $errcnt = 0
    }

    return $output
}
} 

$servername = "Server1"
$DatabaseName = "fg_test"
$BackupLocations = @("c:\psbackups\fg1","\\Server2\shareA","\\Server3\ShareB")
$jobs = @()
$i=0

$SQLSvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($ServerName)
$Db = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Database
$db = $SQLSvr.Databases.Item($DatabaseName)

foreach ($FileGroup in $db.FileGroups){
     $jobs += Start-job -ScriptBlock {backup_server_dbs $args[0] $args[1] $args[2] $args[3]} -InitializationScript $JobFunction -ArgumentList ($servername,$DatabaseName, $Filegroup.Name,  $BackupLocations[$i])
 
    $i++
}


$jobs | wait-job

$FullOutput = ""
foreach ($job in $jobs){
    Receive-Job $job | Tee-Object -Variable joutput
    $FullOutput += $joutput
    Remove-Job $job
}

Write-Output $FullOutput

If you were writing the above script for production use, you’d probably want to include a check to ensure you’ve provided as many backup locations as you have filegroups.
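
Such a guard might look like this; a sketch with stand-in values, where $FileGroupCount represents $db.FileGroups.Count from the script above:

```powershell
# Hedged sketch of the pre-flight check: one backup location per filegroup.
$BackupLocations = @("c:\psbackups\fg1","\\Server2\shareA","\\Server3\ShareB")
$FileGroupCount  = 4   # would be $db.FileGroups.Count in the real script

$abort = $false
if ($FileGroupCount -gt $BackupLocations.Count){
    Write-Warning ("Only {0} backup locations supplied for {1} filegroups" -f $BackupLocations.Count, $FileGroupCount)
    $abort = $true   # the real script would exit here before starting any jobs
}
```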

Tomorrow we’ll be looking at scheduling PowerShell Tasks.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.
