Yesterday we looked at finding SQL Server Backup files and then examining their contents. Today we’re going to look at how we can best record this information for later use in our scripts.

We’re going to create 2 functions:

  1. Scan a given folder structure for SQL Server Backup files, and return the valid files as an array we can store for later use
  2. Read a given file and return the SQL Server Backups contained in it

Function 1 will be Get-SQLBackupFiles:

function Get-SQLBackupFiles
{param ([string]$FilePath, [string]$SQLServer)
    Import-Module "SQLPS" -DisableNameChecking
    $tmpout = @()
    $sqlsvr = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server($SQLServer)
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

    $files = Get-ChildItem $FilePath

    foreach ($file in $files){
        $restoredevice = New-Object -TypeName Microsoft.SqlServer.Management.Smo.BackupDeviceItem($file.FullName,$devicetype)
        $restore.Devices.Add($restoredevice) | Out-Null
        try {
            $restore.ReadMediaHeader($sqlsvr) | Out-Null
        }
        catch [System.Exception]{
            $errcnt = 1
        }
        finally {
            if ($errcnt -ne 1){
                $tmpout += $file.FullName
            }
            $errcnt = 0
        }
        $restore.Devices.Remove($restoredevice) | Out-Null
        Remove-Variable restoredevice
    }
    return $tmpout
}

This is based on the scriptlet we looked at yesterday. We pass in the folder we want to search, and the SQL Server instance we want to run the query on. Remember that the SQL Server service on that server will need to have read access to the file area you're scanning. The function returns an array of all the files that have the correct headers. Out-Null is appended to many of the lines to prevent the output from those commands coming through in the return output.

By having this as a discrete function, we could also make use of the Start-Job asynchronous method we looked at in Day 8 (Asynchronous Backups – Part 2 – PowerShell Jobs) if we wanted to scan lots of files, by splitting the areas into logical divisions and then scanning them in parallel across a number of SQL instances.
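As a rough sketch of that idea (the script path `C:\Scripts\BackupFunctions.ps1` is illustrative, and assumes you've saved Get-SQLBackupFiles there), each job runs in its own runspace, so the function has to be dot-sourced inside the job before it can be called:

```powershell
# Hypothetical parallel scan: two folders, two instances.
$scan = {
    param($Path, $Server)
    # Jobs run in a fresh runspace, so load the function first
    . "C:\Scripts\BackupFunctions.ps1"
    Get-SQLBackupFiles -FilePath $Path -SQLServer $Server
}

$jobs = @()
$jobs += Start-Job -ScriptBlock $scan -ArgumentList "c:\psbackups", "Server1"
$jobs += Start-Job -ScriptBlock $scan -ArgumentList "c:\psbackups2", "Server2"

# Wait for both scans to finish and gather the combined file list
$BackupFiles = $jobs | Wait-Job | Receive-Job
$jobs | Remove-Job
```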

Our 2nd function is Get-BackupContents:

function Get-BackupContents
{param ([string]$FilePath,[string]$SQLServer)
    Import-Module "SQLPS" -DisableNameChecking

    $sqlsvr = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server($SQLServer)
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

    $restoredevice = New-Object -TypeName Microsoft.SqlServer.Management.Smo.BackupDeviceItem($FilePath,$devicetype)
    $restore.Devices.Add($restoredevice) | Out-Null
    $Temp = $restore.ReadBackupHeader($sqlsvr)
    $Temp | Add-Member -MemberType NoteProperty -Name FilePath -Value $FilePath
    $restore.Devices.Remove($restoredevice) | Out-Null
    return $Temp
}

Here we pass in a filepath to the backup file, and a SQL Server instance to use to read the backup. We store the output in a temporary object, and add the filepath to that object before returning it.

So we can call these like so:

$BackupFiles = @()
$Backups = @()

$BackupFiles += Get-SQLBackupFiles -FilePath c:\psbackups -SQLServer "Server1"
$BackupFiles += Get-SQLBackupFiles -FilePath c:\psbackups2 -SQLServer "Server2"

foreach($File in $BackupFiles){
    $Backups += Get-BackupContents -FilePath $File -SQLServer "Server1"
}

So we now have a PowerShell object $Backups which contains all of our backup information. This means we can now use PowerShell’s standard cmdlets to pull information back. For example, this code snippet:

$Backups | Where-Object {($_.DatabaseName -eq "psdb1") -and ($_.BackupTypeDescription -eq "Database")} | Sort-Object -Property BackupStartDate -Descending | Format-List -Property DatabaseName, FilePath, BackupTypeDescription, BackupStartDate, BackupFinishDate

will find all of the Full Database backups of psdb1, return them in descending date order, and then format just the properties we're interested in.
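The same pipeline can feed other standard cmdlets. For example, if we just wanted the single most recent full backup of psdb1 (say, as the starting point for a restore), we could swap Format-List for Select-Object:

```powershell
# Grab only the latest full backup of psdb1 from the collected headers
$LatestFull = $Backups |
    Where-Object {($_.DatabaseName -eq "psdb1") -and ($_.BackupTypeDescription -eq "Database")} |
    Sort-Object -Property BackupStartDate -Descending |
    Select-Object -First 1
```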

Tomorrow we’ll look at getting the same information from the original SQL Server that performed the backups and putting it into the same object. We’ll also cover writing this information back to a SQL Server, so you can reuse the information without having to scan the disks every time.

Then on Thursday (Day 16) we'll look at how we can use this object to ensure we restore backups in the correct order, and also start working towards being able to perform a Point In Time restore to any random point in time for which we have backups available.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.