
Day 21 of 31 Days of SQL Server Backup and Restore using PowerShell: Verifying a restored database

So far we’ve just restored our databases, and assumed that the lack of an error from PowerShell means that all is good. Well, it can be, but it’s not guaranteed.

Paul S. Randal provides a lot of examples of corrupted databases which will restore absolutely fine – http://www.sqlskills.com/blogs/paul/corruption-demo-databases-and-scripts/ – so just relying on a lack of errors doesn’t mean your database is ready for use. If you want a corrupted database to run through these examples and are lucky enough not to have one to hand, these are ready made for you.

Also, a restored copy of your production database is the perfect place to run heavy DBCC checks. Getting a window to run them on production may be a nightmare, but you won’t have to negotiate if it’s a restored copy that no one else is using.

Today we’re just going to look at using SQL Server’s built-in Database Consistency Check (DBCC), but there is nothing to stop you from rolling your own validation scripts which are aware of your business requirements as well as SQL Server’s internal needs.

To do this we’ll be using Invoke-SQLcmd, which lets us run SQL queries. Invoke-SQLcmd returns its results as a PowerShell DataTable, so we can capture quite complex results for later processing or saving.

At its simplest, we can run DBCC against the database of our choice:

Import-Module sqlps -DisableNameChecking
$results = Invoke-SQLCmd -ServerInstance "Server1" -Query "DBCC CHECKDB (database1) WITH TABLERESULTS"

Note that we use WITH TABLERESULTS; this ensures the output from DBCC goes smoothly into our $results table. As it stands this will return 80 rows whether the database has issues or not, as DBCC will return all of the informational messages as well.

If you want to keep the informational messages but don’t want to act upon them, then you can filter them by averaging the Status column:

if (($results | Measure-Object -property status -Average).Average -eq 0){
    # We have only information messages, do nothing
}else{
    #We have error records, do something.
}

A status of 0 indicates no problem, so an average of 0 means that every value is 0.
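If the averaging trick feels too implicit, a Where-Object filter gives the same answer and reads more explicitly (a sketch using the same $results as above):

$errorRows = $results | Where-Object {$_.Status -ne 0}
if ($errorRows -eq $null){
    # Only informational messages were returned, do nothing
}else{
    # We have error records, do something.
}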

You can also suppress the informational messages at the DBCC level by appending NO_INFOMSGS to your query:

Import-Module sqlps -DisableNameChecking
$results = Invoke-SQLCmd -ServerInstance "Server1" -Query "DBCC CHECKDB (database1) WITH TABLERESULTS, NO_INFOMSGS"

This means you’ll only get back rows for errors. In this case, you can simply check the size/length of your results:

if ($results.length -eq 0){
    # No rows returned, so no errors were found; do nothing
}else{
    #We have error records, do something.
}

There is also a third possible outcome. It is possible for the database to be so corrupted that DBCC does not complete, or in some cases even begin! Because of this, we need to wrap the Invoke-SQLCmd call in a Try/Catch block so we catch this as well:

$errorout = @()
try{
    $tmp2 = Invoke-Sqlcmd -Query "DBCC CHECKDB (fatal) with TABLERESULTS, NO_INFOMSGS" -ServerInstance "Server1" -ErrorAction Stop
}
catch [System.Exception]{
    $errorout += "major error, big problem"
    $errorout += $error[0]
}

Note that we need to add -ErrorAction Stop to our call to Invoke-SQLCmd. This is because, in PowerShell terminology, Invoke-SQLCmd raises non-terminating errors; that is, it won’t throw an error that stops execution or can be caught by Try/Catch. By adding -ErrorAction Stop we tell PowerShell that we want any errors raised to be terminating, which means we can catch them. It is also possible to set this at script level, but that can cause some unexpected things to bubble up, so only do so if you know what to expect.
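For completeness, the script-level version is a single line at the top of the script (shown here as a sketch; remember it changes the behaviour of every cmdlet that follows, which is why the per-call switch above is usually the safer choice):

$ErrorActionPreference = "Stop"   # every non-terminating error now becomes terminating and catchable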

There are a number of ways you can handle the errors. At the simplest level you can notify yourself or other operators that an error has been found, and then investigate manually. PowerShell provides a cmdlet Send-MailMessage which you can use to pass the message on:

Push-Location
import-module sqlps -DisableNameChecking
Pop-Location

$results = @()
$Database = "fatal"
$SQLServer = "Server1"
$errorout = ""

try{
    $results = Invoke-Sqlcmd -Query "DBCC CHECKDB ($Database) with TABLERESULTS, NO_INFOMSGS" -ServerInstance $SQLServer -ErrorAction Stop
}
catch [System.Exception]{
    $errorout = $_
}
if (($results.length -eq 0) -and ($errorout -ne "")){
    Send-MailMessage -SmtpServer smtphost.contoso.com -from sql@contoso.com -to stuart.moore@contoso.com -body $errorout -Subject "Fatal DBCC error for $Database"
}elseif ($results.length -gt 0){
   $results | Export-Clixml c:\temp\dbcc.xml
   Send-MailMessage -SmtpServer smtphost.contoso.com -from sql@contoso.com -to stuart.moore@contoso.com -body "See Attachment for output" -Subject "DBCC errors for $Database" -Attachments c:\temp\dbcc.xml
   Remove-Item c:\temp\dbcc.xml
}

In this case I’m dropping the output into XML before sending it as an attachment, but PowerShell provides cmdlets for converting to text, CSV and other formats as well.
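For example, swapping the XML attachment for a CSV is a one-line change (hypothetical path, same $results as above):

$results | Export-Csv -Path c:\temp\dbcc.csv -NoTypeInformation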

The results can also be written back to a database for audit and reporting purposes. You can also write the information from multiple restore servers back to a central repository. Unfortunately this has become slightly more complicated since the release of SQL Server 2012, as Microsoft have changed the columns that various DBCC commands return. There are a number of options to cope with this:

  • Different tables for SQL Server 2012, and pre 2012 SQL Server results
  • Storing the data as XML in the database
  • Only storing certain information (i.e. ignore the bits that don’t fit)

Each of these presents benefits and drawbacks. Here’s a quick example of how to log the results to a table. This is based on having a table like this (this is the pre-SQL Server 2012 version):

CREATE TABLE pre2012DBCCResults
(
[ServerName] VARCHAR(200) NULL,
[DatabaseName] VARCHAR(200) NULL,
[DateOfRun] DATETIME NULL,
[Error] INT NULL,
[Level] INT NULL,
[State] INT NULL,
[MessageText] VARCHAR(7000) NULL,
[RepairLevel] varchar(200) NULL,
[Status] INT NULL,
[DbId] INT NULL,
[ObjectId] INT NULL,
[IndexId] INT NULL,
[PartitionId] INT NULL,
[AllocUnitId] INT NULL,
[File] INT NULL,
[Page] INT NULL,
[Slot] INT NULL,
[RefFile] INT NULL,
[RefPage] INT NULL,
[RefSlot] INT NULL,
[Allocation] INT NULL
)

And then to populate that table we add the following to our Verification script:

Push-Location
import-module sqlps -DisableNameChecking
Pop-Location

$results=@()
$Database = "semifatal"
$SQLServer = "ntuclstr-qc\quality"
$errorout = ""

try{
    $results = Invoke-Sqlcmd -Query "DBCC CHECKDB ($Database) with TABLERESULTS, NO_INFOMSGS" -ServerInstance $SQLServer -ErrorAction Stop
}
catch [System.Exception]{
    $errorout = $_
}
if (($results.length -eq 0) -and ($errorout -ne "")){
    Send-MailMessage -SmtpServer smtphost.contoso.com -from sql@contoso.com -to stuart.moore@contoso.com -body $errorout -Subject "Fatal DBCC error for $Database"
}else{
    $results | Add-Member -MemberType NoteProperty -name ServerName -value $SQLServer
    $results | Add-Member -MemberType NoteProperty -name DatabaseName -value $Database
    $results | Add-Member -MemberType NoteProperty -name DateOfRun -value (get-date) -TypeName datetime

    $ResultsServer = "Server2"
    $ResultsDatabase = "RestoreChecks"
    $ResultsTable = "pre2012DBCCResults"
    $RestoreSQL = New-Object -TypeName Microsoft.SQLServer.Management.Smo.Server($SQLServer)
    if ($RestoreSQL.VersionMajor -lt 11){ # the DBCC columns depend on the version of the server that ran CHECKDB; SQL Server 2012 is version 11
        foreach ($row in $results){
        $row.MessageText = $row.MessageText -replace "'","''"
        $str = "insert into "+$ResultsTable+" ([ServerName],[DatabaseName],[DateOfRun],[Error],[Level],[State],[MessageText],[RepairLevel],[Status],[DbId],[ObjectId],[IndexId],[PartitionId],[AllocUnitId],[File],[Page],[Slot],[RefFile],[RefPage],[RefSlot],[Allocation]) values ('"+$row.ServerName+"','"+$row.DatabaseName+"','"+$row.DateOfRun+"','"+$row.Error+"','"+$row.Level+"','"+$row.State+"','"+$row.MessageText+"','"+$row.RepairLevel+"','"+$row.Status+"','"+$row.DbId+"','"+$row.ObjectId+"','"+$row.IndexId+"','"+$row.PartitionId+"','"+$row.AllocUnitId+"','"+$row.File+"','"+$row.Page+"','"+$row.Slot+"','"+$row.RefFile+"','"+$row.RefPage+"','"+$row.RefSlot+"','"+$row.Allocation+"')"
        Invoke-Sqlcmd -query $str -ServerInstance $ResultsServer -Database $ResultsDatabase
        }
        Send-MailMessage -SmtpServer smtphost.contoso.com -from sql@contoso.com -to stuart.moore@contoso.com -body "Error records in Table: $ResultsTable in $ResultsDatabase on $ResultsServer" -Subject "DBCC errors for $Database"
    }else{
       #Work with the SQL Server 2012 code
    }

}

Again, in this example we’re only storing the results if there’s an error. But it would be simple enough to log all results if you wanted to, by removing NO_INFOMSGS and moving the database write outside of the IF section.
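As an illustration of the second option from the list above, here’s a minimal sketch of storing the whole result set as XML; the table name DBCCResultsXML and its NVARCHAR(MAX) column ResultsXML are hypothetical:

$xml = ($results | ConvertTo-Xml -As String) -replace "'","''"
$str = "insert into DBCCResultsXML ([ServerName],[DatabaseName],[DateOfRun],[ResultsXML]) values ('$SQLServer','$Database',current_timestamp,'$xml')"
Invoke-Sqlcmd -Query $str -ServerInstance $ResultsServer -Database $ResultsDatabase

Because the XML is schema-free, the same table copes with pre- and post-2012 DBCC output without any column juggling.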

If you wanted to run your own custom scripts, then you can just run them with Invoke-SQLCmd and handle the results however you want.

Tomorrow we’ll take a look at the performance differences between running your backups and restores via PowerShell or via T-SQL.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 20 of 31 Days of SQL Server Backup and Restore using PowerShell: Restore to Random Point in Time

Yesterday we looked at restoring our SQL Server database to a point in time, but we had to tell PowerShell and SQL Server when that point should be. As the overall idea of this series is to show you how you can automate your restores, having to manually enter a point in time isn’t really going to cut it. So today we’ll look at how you could pick a point in time from the period covered by your backups. We’ll also look at cutting some of yesterday’s code into reusable functions (remember the PowerShell mantra: “Write functions, not scripts”) that we can add to our existing functions.

First we’ll create a function to return the subset of backup files we want. To save duplication, we’ll make it work both for point in time restores and for returning the files needed to restore to the most recent state. We’ll pass in our collection of backup files, a SQL Server name (this being the server name that the backup was performed against), a database name, and an optional point in time.

function Get-BackupsByDate
{param($BackupFiles, [string]$SQLServer, [string]$DatabaseName, [datetime]$PointInTime=(Get-Date).AddDays(1))

    $filter = get-date($PointInTime) -format "dd MMMM yyyy HH:mm:ss"
    $RestoreFiles = @()
    $RestoreFiles += $BackupFiles | Where-Object {($_.BackupTypeDescription -eq "Database") -and ($_.BackupStartDate -lt $filter) -and ($_.DatabaseName -eq $DatabaseName) -and ($_.ServerName -eq $SQLServer)} | Sort-Object LastLSN -Descending | Select-Object -First 1
    $RestoreFiles += $BackupFiles | Where-Object {($_.BackupTypeDescription -eq "Database Differential") -and ($_.BackupStartDate -lt $filter) -and ($_.DatabaseName -eq $DatabaseName) -and ($_.ServerName -eq $SQLServer)} | Sort-Object LastLSN -Descending | Select-Object -First 1
    $tmpLSN = $RestoreFiles | Measure-Object -Property LastLSN -Maximum
    $RestoreFiles += $BackupFiles | Where-Object {($_.LastLSN -ge $tmpLSN.Maximum) -and ($_.BackupTypeDescription -eq "Transaction Log") -and ($_.BackupStartDate -lt $filter) -and ($_.DatabaseName -eq $DatabaseName) -and ($_.ServerName -eq $SQLServer)}
    $RestoreFiles = $RestoreFiles | Sort-Object -Property LastLSN
    return $RestoreFiles
}

If we don’t get a point in time, we assume that the user wants to restore to the most current state, so we set a time 1 day in the future and pick the files in the normal manner. We also sort the array within the function.
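Called without a point in time, the function therefore returns everything needed for a restore to the most recent state, e.g.:

$LatestFiles = Get-BackupsByDate -BackupFiles $backups -SQLServer $BackupServer -DatabaseName $DatabaseName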

We can then also convert the restore portion of our script into a function that takes the array of files to restore, and a SQLServer to restore them on to:

function Restore-SQLDatabasePointInTime
{param($BackupFiles, [string]$SQLServer, [datetime]$PointInTime=(Get-Date).AddDays(1))

    # SqlRestore needs an SMO Server object, so build one from the server name we were passed
    $RestoreServer = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server($SQLServer)
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restore.ToPointInTime = get-date($PointInTime) -format "MMM dd, yyyy hh:mm tt"

    $devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
    foreach ($backup in $BackupFiles ){
        $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backup.filepath,$devicetype)

        $restore.devices.add($restoredevice)
        $restore.FileNumber = $backup.position
        $restore.Database = $backup.DatabaseName
        $restore.SqlRestore($RestoreServer)

        if ($backup.LastLSN -eq $BackupFiles[-1].LastLSN){
            $query = "Restore database "+$restore.Database.tostring()+" with recovery"
            Invoke-Sqlcmd -ServerInstance $RestoreServer -query $query
        }
        $restore.Devices.Remove($restoredevice)
    }

}

We use the same method to cope with a missing $PointInTime parameter. This will work fine if you pass in a filtered list of backups; it will just restore to the end of the last transaction log rather than stopping at any point in time.

So now we can cut down yesterday’s script to:

$backupfiles = @()
$backups = @()
$restoreFiles = @()
$BackupServer = "Server1"
$RestoreServer = "RestoreServer1"
$DatabaseName = "PointInTime"

$backupfiles += Get-SQLBackupFiles -FilePath c:\piit\ -SQLServer $RestoreServer

foreach($file in $BackupFiles | Sort-Object -property LastLSN){
    $backups += Get-BackupContents -FilePath $file -SQLServer $RestoreServer
}

$PointInTime = get-date("16:59 20/09/2013")

$RestoreFiles = Get-BackupsByDate -BackupFiles $Backups -SQLServer $BackupServer -DatabaseName $DatabaseName -PointInTime $PointInTime

Restore-SQLDatabasePointInTime -BackupFiles $RestoreFiles -SQLServer $RestoreServer -PointInTime $PointInTime

Which is a lot cleaner. But we still have a manually entered Point In time, so how do we get rid of that?

Well, we’ll add another function that filters the backups down to all the backups for a specific database and server name (just in case you’ve got multiple copies of ‘Accounting’ on different servers). Then we can find the maximum and minimum dates covered by those backups, pick a random number in between and convert it to a suitable date:

function Get-DatabasePointInTime
{param($Backupfiles, [string]$SQLServer, [string]$DatabaseName)

    $BackupFiles = $Backupfiles | Where-Object {($_.DatabaseName -eq $DatabaseName) -and ($_.ServerName -eq $SQLServer)}
    $MinDate = ($Backupfiles | Measure-Object -Property BackupStartDate -Minimum).Minimum
    $MaxDate = ($Backupfiles | Measure-Object -Property BackupStartDate -Maximum).Maximum
    $seconds =  Get-Random ($MaxDate - $MinDate).totalseconds
    $PointInTime = $MinDate.AddSeconds($seconds)
    return $PointInTime
}

which reduces our script to:

$backupfiles = @()
$backups = @()
$restoreFiles = @()
$BackupServer = "Server1"
$RestoreServer = "RestoreServer1"
$DatabaseName = "PointInTime"

$backupfiles += Get-SQLBackupFiles -FilePath c:\piit\ -SQLServer $RestoreServer

foreach($file in $BackupFiles | Sort-Object -property LastLSN){
    $backups +=Get-BackupContents -FilePath $file -SQLServer $RestoreServer
}

$PointInTime = Get-DatabasePointInTime -BackupFiles $backups -SQLServer $RestoreServer -DatabaseName $DatabaseName

$RestoreFiles = Get-BackupsByDate -BackupFiles $Backups -SQLServer $BackupServer -DatabaseName $DatabaseName -PointInTime $PointInTime

Restore-SQLDatabasePointInTime -BackupFiles $RestoreFiles -SQLServer $RestoreServer -PointInTime $PointInTime

By changing the start of the script we can allow the script to take in parameters:


if ($args.Length -lt 4){
     $x = @"
     Wrong usage. Should be : .\PITRestore.ps1 <RestoreServer> <BackupServer> <DatabaseName> <FilePath>
      - Name of the SQL Server doing the restore
      - Name of the SQL Server where the backup was performed
      - Database to Restore
      - Path to backup files
"@
     write-host $x
     exit
}
$RestoreServer = $args[0]
$BackupServer = $args[1]
$DatabaseName = $args[2]
$FilePath = $args[3]

$backupfiles = @()
$backups = @()
$restoreFiles = @()

$backupfiles += Get-SQLBackupFiles -FilePath $filepath -SQLServer $RestoreServer

foreach($file in $BackupFiles | Sort-Object -property LastLSN){
    $backups +=Get-BackupContents -FilePath $file -SQLServer $RestoreServer
}

$PointInTime = Get-DatabasePointInTime -BackupFiles $backups -SQLServer $RestoreServer -DatabaseName $DatabaseName

$RestoreFiles = Get-BackupsByDate -BackupFiles $Backups -SQLServer $BackupServer -DatabaseName $DatabaseName -PointInTime $PointInTime

Restore-SQLDatabasePointInTime -BackupFiles $RestoreFiles -SQLServer $RestoreServer -PointInTime $PointInTime

And we can now call that as:

.\PITRestore.ps1 "RestoreServer1" "Server1" "PointInTime" "c:\piit\"

Which is simple to schedule from Task Scheduler (see Day 10).
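The scheduled task’s action would be along these lines (a sketch; adjust the script path to suit your environment):

powershell.exe -ExecutionPolicy RemoteSigned -File "C:\PSScripts\PITRestore.ps1" "RestoreServer1" "Server1" "PointInTime" "c:\piit\"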

So you can now easily schedule a restore of a database to a random point in time. This leads on to the next question: how can we verify that the restored database is sound? We’ll look at how we can automate verification of the new database with PowerShell tomorrow.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 19 of 31 Days of SQL Server Backup and Restore using PowerShell: Point In Time Restores

Over the last couple of posts we’ve covered increasingly complex restore scenarios, but we’ve always restored to the most recent point we can. Today we’re going to investigate restoring to a point somewhere else in the backup chain.

To do this, we may have to find:

  • The most recent Full database backup taken prior to the point we want to restore to
  • The most recent differential backup prior to the restore point
  • All of the transaction log backups from the differential up to and including the one covering the point we want to restore to

We may not need all of these. It may be that our Point in Time (PiT) requirement can be covered purely by our full backup, or we may not need the differential because we can reach our PiT with just the full backup and transaction logs.

To give ourselves enough data to work with, we’ll create a simple database:

create database PointInTime;
go

use PointInTime;
go

create table step
(
StepID int identity(1,1),
StepTime datetime
)
go

And then we’ll simulate 7 “days” worth of database usage:

import-module "SQLPS" -DisableNameChecking

$Server = "WIN-C0BP65U3D4G"

$minutes = 0
$hours = 0
$days = 0

Backup-SqlDatabase -ServerInstance $Server -Database PointInTime -BackupAction Database -BackupFile "c:\piit\chain-start.bak"

While ($days -lt 8){
    $hours = 0
    while ($hours -lt 24){
        $minutes = 0
        while ($minutes -lt 60){
            Invoke-Sqlcmd -ServerInstance $Server -query "insert into PointInTime.dbo.Step (steptime) values (current_timestamp)"
            Start-Sleep -Milliseconds 500
            $minutes++
        }
        if ($hours -eq 18){
            if ($days%3 -eq 0){
                #full backup
                $backupfile = "c:\piit\PointInTime_"+$days+"_"+$hours+".bak"
                Backup-SqlDatabase -ServerInstance $Server -Database PointInTime -BackupAction Database -BackupFile $backupfile
            }else{
                #diff backup
                $backupfile = "c:\piit\PointInTime_.bcf"
                Backup-SqlDatabase -ServerInstance $Server -Database PointInTime -BackupAction Database -BackupFile $backupfile -Incremental
            }
        }else{
            #log backup
             $backupfile = "c:\piit\PointInTime_"+$days+"_"+$hours+".trn"
             Backup-SqlDatabase -ServerInstance $Server -Database PointInTime -BackupAction Log -BackupFile $backupfile
        }
        $hours++
    }
    $days++
}

Here we’re simulating 1 “week” of the life of a database, with the following events:

  • Every ‘minute’ we insert a row into the steps table
  • Every ‘hour’ we take a transaction log backup
  • At ’18:00′ on every 3rd ‘day’ we take a full backup
  • At ’18:00′ on days where we don’t take a full backup we take a differential backup

The script should take about 90 minutes to complete, and give you 162 backup files. And just to be awkward, and so we can reprove our previous scripts, I’ve put all the differential backups into a single file.
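A quick sanity check once the script finishes (assuming the same c:\piit\ folder):

(Get-ChildItem c:\piit\).Count   # should match the number of backup files mentioned above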

My version of this database runs from 16:13 20/09/2013 till 17:36, so I need to pick a PiT between those boundaries. Again we’re going to assume we’ve saved our file reading functions into a script file, so we can just dot-source (.) them here:

$backupfiles = @()
$backups = @()

$backupfiles += Get-SQLBackupFiles -FilePath c:\piit\ -SQLServer "RestoreServer"

foreach($file in $BackupFiles | Sort-Object -property LastLSN){
    $backups +=Get-BackupContents -FilePath $file -SQLServer "RestoreServer"
}

$PointInTime = get-date("16:59 20/09/2013")
$filter = get-date($PointInTime) -format "dd MMMM yyyy HH:mm:ss"
$PointInTimeSQL = get-date($PointInTime) -format "MMM dd, yyyy hh:mm tt"
$RestoreFiles = @()

$RestoreFiles += $Backups | Where-Object {($_.BackupTypeDescription -eq "Database") -and ($_.BackupStartDate -lt $filter)} | Sort-Object LastLSN -Descending | Select-Object -First 1
$RestoreFiles += $Backups | Where-Object {($_.BackupTypeDescription -eq "Database Differential") -and ($_.BackupStartDate -lt $filter)} | Sort-Object LastLSN -Descending | Select-Object -First 1
$tmpLSN = $RestoreFiles | Measure-Object -Property LastLSN -Maximum
$RestoreFiles += $Backups | Where-Object {($_.LastLSN -ge $tmpLSN.Maximum) -and ($_.BackupTypeDescription -eq "Transaction Log") -and ($_.BackupStartDate -lt $filter)}

$RestoreServer = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("RestoreServer")
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

$RestoreFiles = $RestoreFiles | sort-object -property LastLSN

foreach ($backup in $RestoreFiles ){
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backup.filepath,$devicetype)
    $restore.devices.add($restoredevice)
    $restore.FileNumber = $backup.position
    $restore.ToPointInTime = $PointInTimeSQL
    $restore.Database = $backup.DatabaseName
    $restore.SqlRestore($RestoreServer)

    $backup | fl filepath, lastlsn, backuptypedescription, backupstartdate
    if ($backup.LastLSN -eq $RestoreFiles[-1].LastLSN){
        $query = "Restore database "+$restore.Database.tostring()+" with recovery"
        Invoke-Sqlcmd -ServerInstance $RestoreServer -query $query
    }
    $restore.Devices.Remove($restoredevice)
}

The first few lines are the familiar ones we’ve been using over the last couple of posts.

We then set some DateTime variables. We need a couple of different formats to match the formats in use later in the script, and I find it easier to keep the formatting separate to make the later lines more readable.

We then build up a $RestoreFiles array like before. But this time the filtering is different. Now we don’t want the latest full backup, we want the latest full backup prior to our Point in Time. And then we want the newest differential in the gap between our full backup and our Point in Time. Finally, we find all the transaction log backups between our differential and our Point in Time, remembering that we also need the log backup that goes past the Point in Time! It can be easy to get the filtering wrong and stop at the log backup before the correct last one.

We sort $RestoreFiles by LastLSN to ensure that we restore the backup files in the correct order. We then loop through every backup in $RestoreFiles, building up a Restore object and restoring it. After each restore we compare the current backup’s LastLSN value with that of the final element in $RestoreFiles (the index position -1 in an array will return the last object in the array); if they are the same then we have restored our last file, and now need to recover our database. We do this by using Invoke-SQLCmd to run a RESTORE DATABASE [dbname] WITH RECOVERY.

And if you now run select max(steptime) from PointInTime.dbo.Step you’ll find that you have a recovered database and that the returned value should be within 30 seconds of your chosen point in time.
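You can run that check from the same PowerShell session (a quick sketch against the restore server):

Invoke-Sqlcmd -ServerInstance $RestoreServer -Query "select max(steptime) as LastStep from PointInTime.dbo.Step"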

Tomorrow we’ll look at cutting this up into reusable functions, and also adding some code to enable our scripts to determine a random point in time covered by our backups, and restore to that point for testing.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 18 of 31 Days of SQL Server Backup and Restore using PowerShell: Relocating files during Restore

So far during the restores portion of this series we’ve been assuming that we can restore our database’s files back to the same locations they occupied on the original server. Often this is not the case, even on a dedicated restore server:

  • there’s the possibility that the files already exist
  • a good chance that the restore server’s filesystems are laid out differently
  • you may want to be able to restore the same database to multiple points in time
  • or many other reasons

So today we’re going to look at how you can quickly relocate every file of a restored database without knowing anything about the files prior to the restore.

To do this we’re going to use another method of the SMO Restore object, ReadFileList. This works in the same manner as the T-SQL RESTORE FILELISTONLY: when run against a backup file it will list all the database files contained within it. Under PowerShell this returns a DataTable object which we can loop through to identify every file. We can use this information, plus details we want to provide, to build RelocateFile objects, which we can then add to the RelocateFiles collection of the Restore object.
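In isolation, ReadFileList usage looks like this (a minimal sketch, assuming one of the Day 17 backup files exists at c:\DiffRestore\DiffRestore.bak):

$svr = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server("Server1")
$rst = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
$rst.Devices.Add((New-Object -TypeName Microsoft.SqlServer.Management.Smo.BackupDeviceItem("c:\DiffRestore\DiffRestore.bak",[Microsoft.SqlServer.Management.Smo.DeviceType]::File)))
$rst.ReadFileList($svr) | Format-Table LogicalName, PhysicalName, Size, IsPresent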

For all the examples in today’s post we’ll assume that the following PowerShell has been run to build the basic restore environment:

Import-Module sqlps -DisableNameChecking
."c:\PSScripts\SQLBackupFunctions.ps1"

$backupfiles = @()
$backups = @()

$backupfiles += Get-SQLBackupFiles -FilePath c:\DiffRestore -SQLServer "Server1"

foreach($file in $BackupFiles){
    $backups +=Get-BackupContents -FilePath $file -SQLServer "Server1"
}

$RestoreFiles += $Backups | Where-Object {$_.BackupTypeDescription -eq "Database"} | Sort-Object LastLSN | Select-Object -First 1
$RestoreFiles += $Backups | Where-Object {$_.BackupTypeDescription -eq "Database Differential"} | Sort-Object LastLSN | Select-Object -First 1
$tmpLSN = $RestoreFiles | Measure-Object -Property LastLSN -Maximum
$RestoreFiles += $Backups | Where-Object {($_.LastLSN -ge $tmpLSNMaximum) -and ($_.BackupTypeDescription -eq "Transaction Log")}

$restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

$HasThereBeenAnError = 0

As a basic example we’ll relocate all the database files in a backup file to c:\restorefiles\:

# Build the server connection and the target folder for the relocated files
$SQLsvr = New-Object -TypeName Microsoft.SQLServer.Management.Smo.Server("Server1")
$Path = "c:\restorefiles\"

$restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($RestoreFiles[0].FilePath,$devicetype)
$restore.Devices.Add($restoredevice)
$DBFiles = $restore.ReadFileList($SQLsvr)
if (($DBFiles | Measure-Object -Property IsPresent -average).Average -eq 1){
    foreach($File in $DBFiles){
	    $rf = new-object -typename Microsoft.SqlServer.Management.Smo.RelocateFile
	    $rf.LogicalFileName = $File.LogicalName
	    $rf.PhysicalFileName = $Path+(Split-Path $File.PhysicalName -Leaf)
	    $Restore.RelocateFiles.Add($rf)
	    Remove-Variable rf
    }
}else{
    Write-Output "Does Not contain all db files"
    break;
}

$Restore.SqlRestore($SQLsvr)

We use ReadFileList to read the contents of our backup file into $DBFiles. We then use this line to make sure the backup contains all the db files:

if (($DBFiles | Measure-Object -Property IsPresent -average).Average -eq 1){

This makes use of the fact that PowerShell treats $True as 1 and $False as 0 when you run a mathematical function across them. If all the files are present then the average should be 1, if any files are not present (IsPresent=False) then the average will be less than 1. This will happen if you pick a file that contains a File or FileGroup Backup. We’ll cover performing File and FileGroup restores in a later article, but will pretend they don’t exist for this one!
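You can see the coercion for yourself at the prompt (casting explicitly to keep it unambiguous):

[int]$true     # returns 1
[int]$false    # returns 0
(($true,$true,$false) | ForEach-Object {[int]$_} | Measure-Object -Average).Average    # roughly 0.67, so not all "present"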

Assuming we’ve got a backup with all the files, we then loop through the data table. For each file we create a new RelocateFile object. We populate the LogicalFileName property directly and then build up a new path for the PhysicalFileName. We use the Split-Path cmdlet with its -Leaf switch to split the filename and extension from the rest of the path (i.e. Split-Path "c:\test\file.txt" -Leaf would return file.txt). We then add the RelocateFile object to the RelocateFiles collection of our Restore object.

The ReadFileList method can also be used to check the size of the resulting restore. When SQL Server backs up a database it only writes out the data pages, ignoring empty space in the data files. So a database that has 250GB allocated on disk but only contains 120GB of data will produce a smaller backup than another database which has 250GB allocated on disk and contains 240GB of data. However, when SQL Server restores a database it restores the database files to the size they were when the backup was taken, complete with empty space. So in the first case, despite only having 120GB of data in your database, you’re going to need 250GB of spare disk space to restore the database. And despite all the pleas on dba.stackexchange.com there’s no way around that. The other problem is that as you roll forward transaction log backups the required amount of space on disk can change, so just because you can restore the full backup it doesn’t follow that you can restore the rest of the chain.
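For instance, totalling the Size column from a single filelist tells you how much disk the restored data files will need (a sketch reusing the $restore object and $SQLsvr from the example above):

$SpaceNeeded = ($restore.ReadFileList($SQLsvr) | Measure-Object -Property Size -Sum).Sum
Write-Output ("Restore requires {0:N1} GB" -f ($SpaceNeeded/1GB))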

By running ReadFileList across all the files to be restored we can calculate the maximum file size for each file in the database, and place the files on filesystems with enough space, or decide not to proceed with a restore that’s doomed to failure. We can do this as follows:

$DBfiles = @{}
$DBfileSize = @()
foreach ($file in $RestoreFiles){
    $sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($SQLServer)
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($file.FilePath,$devicetype)
    $restore.Devices.add($restoredevice)
    $dbfiles = $Restore.ReadFileList($sqlsvr)
    foreach ($tmpfile in $DBfiles){
        if (($dbfilesize.logicalName -Contains $tmpfile.logicalname) -eq $False){
            $tmpobj = New-Object System.Object
            $tmpobj | Add-Member -type NoteProperty -name LogicalName -value $tmpfile.LogicalName
            $tmpobj | Add-Member -type NoteProperty -name PhysicalName -value $tmpfile.PhysicalName
            $tmpobj | Add-Member -type NoteProperty -name Size -value $tmpfile.Size
            $DBFileSize += $tmpobj

        }
        if ($tmpfile.size -gt ($dbfilesize | Where-Object {$_.LogicalName -eq $tmpfile.LogicalName}).size){
            ($dbfilesize | Where-Object {$_.LogicalName -eq $tmpfile.LogicalName}).size = $tmpfile.size
        }

    }
    $restore.Devices.Remove($restoredevice)
}

$RestoreLocations = @("c:\restores\","d:\restores\","e:\restores\")
$LocationCount = 0
$MarginForError = 200*1024*1024
$FSO = New-Object -Com Scripting.FileSystemObject
$RelocateFile = @()
$CurrentlyNeeded = @{}   # hash table tracking the space we plan to use on each location

$i = 0
while ($i -lt $RestoreLocations.Count){
    $CurrentlyNeeded[$i]=0
    $i++
}
$i=0

while ($i -lt $DBfileSize.count){
    if ($DBFileSize[$i].size + $MarginForError + $CurrentlyNeeded[$LocationCount] -gt $FSO.getdrive($(Split-Path $RestoreLocations[$LocationCount] -Qualifier)).AvailableSpace){
        $LocationCount++
        if($LocationCount -eq $RestoreLocations.Count){
            write-output "No Space anywhere for this file!"
            break;
        }
    }else{
        $rf = new-object -typename Microsoft.SqlServer.Management.Smo.RelocateFile
        $rf.LogicalFileName = $DBFileSize[$i].LogicalName
        $rf.PhysicalFileName = $RestoreLocations[$LocationCount]+(Split-Path $DBFileSize[$i].PhysicalName -Leaf)
        $RelocateFile += $rf
        Remove-Variable rf
        $CurrentlyNeeded[$LocationCount] += $DBfileSize[$i].size
        $i++
        $LocationCount = 0
    }
}

$RestoreServer = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("RestoreServer")
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
foreach ($backup in $RestoreFiles | sort-object LastLSN){
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backup.filepath,$devicetype)
    $restore.devices.add($restoredevice)
    $restore.FileNumber = $backup.position
    $restore.Database = $backup.DatabaseName
    foreach ($f in $RelocateFile){
        $restore.RelocateFiles.add($f)
    }
    if ($backup.LastLSN -eq ($RestoreFiles | Measure-Object -Property LastLSN -Maximum).Maximum){
        $restore.NoRecovery = $False
    }else{
        $restore.NoRecovery = $True
    }
    $restore.SqlRestore($RestoreServer)

    $restore.Devices.Remove($restoredevice)
}

This snippet could potentially take a while to run as it has to read the filelist from every backup file passed into it, so if you’ve got 100 transaction logs to process after your Full Backup be prepared for a bit of a wait!

Quite a lot going on in here. We start off by declaring a hash table $DBfiles and an array $DBFileSize. Next we loop through our list of backup files in $RestoreFiles. For each one we call ReadFileList and put the results into $dbfiles. We then loop through each file returned. If we’ve not seen the particular file before, we create an object holding its LogicalName, PhysicalName and Size. This object is then added to the $DBFileSize array. If we have seen the file before then we compare the size returned by ReadFileList to the one in $DBFileSize; if the new value is larger we replace the value in $DBFileSize. And then we go back around the loop.

Once we’ve been through every file, we move on to actually checking our free space. We initialise a couple of variables first:

  • $RestoreLocations – An array holding all the allowed locations (1 per drive or we’ll get confused later please!)
  • $LocationCount – A counter to keep track of which Location from the above we’re currently looking at
  • $MarginForError – Just to leave some room for anything unexpected happening between checking and restoring, in the example this is set to 200MB
  • $FSO – FileScripting Object to use to get free drive space
  • $RelocateFile – An array which will hold our relocation information for pushing into our restore
  • $CurrentlyNeeded – A hash table we’ll programmatically populate next. This is going to be used to keep track of the space our restore is going to use on each Location

Next we populate the hashtable $CurrentlyNeeded with an entry of 0 for every entry in our $RestoreLocations array. Then we reset our counter $i to 0 for the next section.

We now loop through $DBFileSize. We’re using a While loop here as we want to try every possible location for each file in the array till we’ve either exhausted the possibilities, or we’ve found somewhere for it to go.

The first thing we do in the loop is check whether the size of the current file, plus the amount we’re already planning to use on the current drive, plus our MarginForError, is more than the amount of free space on the drive containing the folder. If there isn’t enough space we increment our $LocationCount by 1, and go round the loop again with the same database file. If we increment $LocationCount beyond the end of our $RestoreLocations array we write out an error message and stop looping, as we don’t have anywhere with enough free space to restore this file.

If on one of the loops we can find some space, then we build a new RelocateFile object ($rf) and populate it with our file details. We add it to our array $RelocateFile, add the size of the restored file to the space used on the drive we’re going to put it on, and then increment $i so we move on to the next file, and reset $LocationCount to 0 so we start from the beginning of our locations again.

Next we use the same restore pattern as we used in yesterday’s post, with the addition of:

    foreach ($f in $RelocateFile){
        $restore.RelocateFiles.add($f)
    }

to pass our newly created array of file relocations into the restore object one by one. Then, all being well, we restore our database to its new home!

Today’s is a long post, but the core concepts are in the first half. The longer section afterwards is to give you some hints about how you can use this functionality to really streamline automating your restores.

Tomorrow, we’re going to start looking at restoring to a point in the database’s life other than the latest backup.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 17 of 31 Days of SQL Server Backup and Restore using PowerShell: More complex restore scenarios with Differentials

Building on yesterday’s technique for ensuring we restore the correct backup files in the correct order, today we’re going to look at what happens when you bring differentials into the mix, and maybe have more than one day’s backups for your target database.

Like yesterday, we’ll build a database to use as a demonstration:

create database DiffRestore;
go

use DiffRestore
go

create table steps(
StepID int identity(1,1),
StepTime datetime
)
go
backup database DiffRestore to disk='c:\DiffRestore\DiffRestore.bak'
go
insert into steps (steptime) values (current_timestamp)
go
backup log DiffRestore to disk='c:\DiffRestore\DiffRestore.trn'
go
insert into steps (steptime) values (current_timestamp)
go
backup database DiffRestore to disk='c:\DiffRestore\DiffRestore.bak'
go
insert into steps (steptime) values (current_timestamp)
go
backup log DiffRestore to disk='c:\DiffRestore\DiffRestore2.trn'
go
insert into steps (steptime) values (current_timestamp)
go
backup database DiffRestore to disk='c:\DiffRestore\DiffRestore.bak' with differential
go
insert into steps (steptime) values (current_timestamp)
go
backup log DiffRestore to disk='c:\DiffRestore\DiffRestore.trn'
go
insert into steps (steptime) values (current_timestamp)
go
backup database DiffRestore to disk='c:\DiffRestore\DiffRestore.bak' with differential
go
insert into steps (steptime) values (current_timestamp)
go
backup log DiffRestore to disk='c:\DiffRestore\DiffRestore.trn'
go

So we have a simple database, but with a more complicated backup structure. We have, in this order:

  1. Full Backup
  2. Transaction Log Backup
  3. Full Backup
  4. Transaction Log Backup
  5. Differential Backup
  6. Transaction Log Backup
  7. Differential Backup
  8. Transaction Log Backup

So to restore this database to its latest state we need to restore Backup 3, then Backup 7 (differential backups are cumulative, so we only need to restore the latest one) and then any following transaction log backups. In this case that’s just Backup 8.

In this case we’ll extract the backups we require, and build a new object to loop through for the actual restore. We’ll assume that the functions we created on Day 14 and Day 15 have been stored in the file SQLBackupFunctions.ps1:

Import-Module sqlps -DisableNameChecking
."c:\PSScripts\SQLBackupFunctions.ps1"

$backupfiles = @()
$backups = @()

$backupfiles += Get-SQLBackupFiles -FilePath c:\DiffRestore -SQLServer "Server1"

foreach($file in $BackupFiles){
    $backups +=Get-BackupContents -FilePath $file -SQLServer "Server1"
}

$RestoreFiles = @()

$RestoreFiles += $Backups | Where-Object {$_.BackupTypeDescription -eq "Database"} | Sort-Object LastLSN | Select-Object -First 1
$RestoreFiles += $Backups | Where-Object {$_.BackupTypeDescription -eq "Database Differential"} | Sort-Object LastLSN | Select-Object -First 1
$tmpLSN = $RestoreFiles | Measure-Object -Property LastLSN -Maximum
$RestoreFiles += $Backups | Where-Object {($_.LastLSN -ge $tmpLSNMaximum) -and ($_.BackupTypeDescription -eq "Transaction Log")}

$RestoreServer = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("RestoreServer")
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
foreach ($backup in $RestoreFiles | sort-object LastLSN){
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backup.filepath,$devicetype)
    $restore.devices.add($restoredevice)
    $restore.FileNumber = $backup.position
    $restore.Database = $backup.DatabaseName
    if ($backup.LastLSN -eq ($RestoreFiles | Measure-Object -Property LastLSN -Maximum).Maximum){
        $restore.NoRecovery = $False
    }else{
        $restore.NoRecovery = $True
    }
    $restore.SqlRestore($RestoreServer)

    $restore.Devices.Remove($restoredevice)
}

We do our normal scan of the filesystem to find our backup files and read their headers into our $Backups array. Then, as we know we want to restore to the latest point, we can pick the full backup and the differential backup with the highest LSN. We then get all transaction log backups with a greater LastLSN than the differential backup. All of these are added to an array $RestoreFiles, which we then step through, restoring each file and populating the required values from our array.

So far we’ve only looked at restoring onto servers where we’re assuming that our database doesn’t already exist. Tomorrow we’ll look at how we can easily redirect all the files in a restored database if needed.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 16 of 31 Days of SQL Server Backup and Restore using PowerShell: A more robust transaction restore

Over the last 2 days we’ve looked at getting backup information from 2 sources, scraping it from the backup files on disk (Day 14) and querying the msdb history tables (Day 15), and loading it into a PowerShell object. Today we’re going to put that object to use, showing how we can make sure we’re restoring files in the right order without having to rely on filenames or modified dates.

To setup we need a database and some backups. And we’ll make those backups a little complicated just to show off everything.

create database RobustRestore;
go

use robustrestore
go

create table steps(
StepID int identity(1,1),
StepTime datetime
)
go
backup database RobustRestore to disk='c:\robust\robustrestore.bak'
go
insert into steps (steptime) values (current_timestamp)
go
backup log RobustRestore to disk='c:\robust\robustrestore.trn'
go
insert into steps (steptime) values (current_timestamp)
go
backup log RobustRestore to disk='c:\robust\robustrestore2.trn'
go
insert into steps (steptime) values (current_timestamp)
go
backup log RobustRestore to disk='c:\robust\robustrestore.trn'
go
insert into steps (steptime) values (current_timestamp)
go
backup log RobustRestore to disk='c:\robust\robustrestore2.trn'
go

We now have a backup chain, and our transaction logs are interleaved across 2 files. Not something you’d want to see in production, but suitably complex for a simple demonstration.

First we use our disk scraping technique to load our object with all the backup information from the files. The function definitions are here – Day 14 of 31 Days of SQL Server Backup and Restore using PowerShell: Reading Backup information into a custom object

$backupfiles = @()
$backups = @()

$backupfiles += Get-SQLBackupFiles -FilePath c:\robust -SQLServer "Server1"

foreach($file in $BackupFiles){
    $backups +=Get-BackupContents -FilePath $file -SQLServer "Server1"
}

Assuming you get no errors, we now have a $backups object with 5 entries, 1 for each of the backups. So how are we going to use this to run a restore, and make sure we get everything in the right order? Well, let’s take a look at one of the entries:

PS C:\> $backups.item(1)

FilePath               : C:\robust\robustrestore.trn
BackupName             :
BackupDescription      :
BackupType             : 2
ExpirationDate         :
Compressed             : 0
Position               : 1
DeviceType             : 2
UserName               : Server1\Administrator
ServerName             : Server1
DatabaseName           : RobustRestore
DatabaseVersion        : 706
DatabaseCreationDate   : 18/09/2013 09:14:01
BackupSize             : 77824
FirstLSN               : 40000000007600178
LastLSN                : 40000000016300001
CheckpointLSN          : 40000000007600178
DatabaseBackupLSN      : 40000000007600178
BackupStartDate        : 18/09/2013 09:14:02
BackupFinishDate       : 18/09/2013 09:14:02
SortOrder              : 0
CodePage               : 0
UnicodeLocaleId        : 1033
UnicodeComparisonStyle : 196609
CompatibilityLevel     : 110
SoftwareVendorId       : 4608
SoftwareVersionMajor   : 11
SoftwareVersionMinor   : 0
SoftwareVersionBuild   : 2100
MachineName            : Server1
Flags                  : 512
BindingID              : 815b85e5-d051-4afd-a192-bb9fb2f7302c
RecoveryForkID         : df243f5b-a451-427e-8622-ecc7996a390a
Collation              : Latin1_General_CI_AS
FamilyGUID             : df243f5b-a451-427e-8622-ecc7996a390a
HasBulkLoggedData      : False
IsSnapshot             : False
IsReadOnly             : False
IsSingleUser           : False
HasBackupChecksums     : False
IsDamaged              : False
BeginsLogChain         : True
HasIncompleteMetaData  : False
IsForceOffline         : False
IsCopyOnly             : False
FirstRecoveryForkID    : df243f5b-a451-427e-8622-ecc7996a390a
ForkPointLSN           :
RecoveryModel          : FULL
DifferentialBaseLSN    :
DifferentialBaseGUID   :
BackupTypeDescription  : Transaction Log
BackupSetGUID          : 2d5edca4-7ed2-473d-b2ca-124f9cca8784
CompressedBackupSize   : 77824
Containment            : 0

So we can see that we have all the information we could want in there to build up our restores. We can use BackupType or BackupTypeDescription to determine what sort of restore (database or log) we need, we have FilePath to tell us where the file is, Position tells us where in that file the particular backup sits, and DatabaseName lets us confirm that we’re restoring a backup from the correct database.

But how do we ensure we’re restoring the backups in the right order? We have 2 options here: times or LSNs. We could order by BackupStartDate or BackupFinishDate, and most of the time this would be absolutely fine. But if we have backups that take a particularly long time to complete you can end up with backups happening within the window of other backups.

A more reliable method is to use Log Sequence Numbers (LSNs). LSNs are strictly increasing numbers that SQL Server assigns to every transaction, written into the transaction log when the transaction is committed. LSNs are not contiguous or of any use for arithmetic (i.e. subtracting FirstLSN from LastLSN for a transaction log backup won’t tell you how many transactions the backup contains). What they do guarantee is that if a_LSN is greater than b_LSN, then a_LSN was committed to the transaction log AFTER b_LSN. So this makes them ideal for working out the order in which to restore our files. For more information on LSNs, please see Technet for an Introduction to Log Sequence Numbers.
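Before restoring, it’s worth eyeballing the chain in LSN order to confirm it looks sensible:

$backups | Sort-Object LastLSN | Format-Table FilePath, Position, BackupTypeDescription, FirstLSN, LastLSN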

So knowing that, we can now look at our restore script:

$RestoreServer = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("RestoreServer")
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
foreach ($backup in $backups | sort-object LastLSN){
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backup.filepath,$devicetype)
    $restore.devices.add($restoredevice)
    $restore.FileNumber = $backup.position
    $restore.Database = $backup.DatabaseName
    if ($backup.LastLSN -eq ($backups | Measure-Object -Property LastLSN -Maximum).Maximum){
        $restore.NoRecovery = $False
    }else{
        $restore.NoRecovery = $True
    }
    $restore.SqlRestore($RestoreServer)

    $restore.Devices.Remove($restoredevice)
}

As you can see, we can now dynamically populate everything from our $Backups object. Except for the SQL Server we want to restore to.

In that example we were assuming that we only have backups from one database, and even just the one SQL Server in the folder we’d scanned. By altering the filtering in the foreach we can ensure we only see the details we want to see. For example, if you wanted to restore the backup of Database “dbprod” from SQL Server “DBServer1” then you’d do the following:

$RestoreServer = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("RestoreServer")
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
$dbprodBackups = $backups | Where-Object {($_.DatabaseName -eq "dbprod") -and ($_.ServerName -eq "DBServer1")} | sort-object LastLSN
foreach ($backup in $dbprodBackups){
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backup.filepath,$devicetype)
    $restore.devices.add($restoredevice)
    $restore.FileNumber = $backup.position
    $restore.Database = $backup.DatabaseName
    if ($backup.LastLSN -eq $dbprodBackups[-1].LastLSN){
        $restore.NoRecovery = $False
    }else{
        $restore.NoRecovery = $True
    }
    $restore.SqlRestore($RestoreServer)

    $restore.Devices.Remove($restoredevice)
}

From that it can be seen that we can really leverage PowerShell’s built-in cmdlets to extract just the restore information we need from a whole range of backups.

Tomorrow we’ll look at how we can work with multiple days worth of data, and also bring Differential backups into the picture.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 15 of 31 Days of SQL Server Backup and Restore using PowerShell: Getting Backup information from SQL Server

Yesterday we looked at scraping backup information from SQL Server backup files on disk. There is another source of information available: every backup in SQL Server is also recorded within tables in msdb. For the simple examples we’re looking at so far, we just need to consider 2 of them. These are:

  • msdb.dbo.backupset – Holds the information about every backup
  • msdb.dbo.backupmediafamily – Holds the location of the physical backup files

There are a number of other tables that also contain information that is needed when dealing with File or Filegroup backups, or with backups that have been spanned across multiple mediasets

The only issue with using this data, as opposed to the data ‘scraped’ from disk, is that most people manage their backup files outside of SQL Server, so there’s no guarantee that the file is where SQL Server left it. It could have been archived, compressed or just plain deleted. This is the case even if you’re using the ‘Maintenance Cleanup Task’ in a Maintenance Plan to delete your old backups. Running the ‘History Cleanup Task’ will prune your backup history down to your specifications (SQL Server won’t do this unless told to, and I have seen msdb databases that have ballooned to a ridiculous size with years of backup records!). Because of this you may want to filter the data you retrieve from SQL Server by your retention policy, and also check that any file you reference does in fact exist, to avoid problems.
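If msdb history has been left to grow unchecked, pruning it is a single call to a system stored procedure (a hedged example; pick an @oldest_date that matches your retention policy):

Invoke-Sqlcmd -ServerInstance "Server1" -Query "exec msdb.dbo.sp_delete_backuphistory @oldest_date = '01/01/2013'"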

The data returned from the ReadBackupHeader and msdb.dbo.backupset is VERY similar, but with differences in naming convention. And the enumerators for BackupType | Type need a CASE statement to translate them. The basic query we’ll be running is:

select
 a.backup_finish_date as BackupFinishDate,
 a.backup_set_uuid as BackupSetGUID,
 a.backup_size as BackupSize,
 a.backup_start_date as BackupStartDate,
 a.begins_log_chain as BeginsLogChain,
 a.checkpoint_lsn as CheckpointLSN,
 a.collation_name as Collation,
 a.compatibility_level as CompatibilityLevel,
 a.compressed_backup_size as CompressedBackupSize,
 a.database_backup_lsn as DatabaseBackupLSN,
 a.database_creation_date as DatabaseCreationDate,
 a.database_name as DatabaseName,
 a.database_version as DatabaseVersion,
 a.description as BackupDescription,
 a.differential_base_guid as DifferentialBaseGUID,
 a.differential_base_lsn as DifferentialBaseLSN,
 a.expiration_date as ExpirationDate,
 a.family_guid as FamilyGUID,
 a.first_lsn as FirstLSN,
 a.first_recovery_fork_guid as FirstRecoveryForkID,
 a.flags as Flags,
 a.fork_point_lsn as ForkPointLSN,
 a.has_backup_checksums as HasBackupChecksums,
 a.has_bulk_logged_data as HasBulkLoggedData,
 a.has_incomplete_metadata as HasIncompleteMetaData,
 a.is_copy_only as IsCopyOnly,
 a.is_damaged as IsDamaged,
 a.is_force_offline as IsForceOffline,
 a.is_password_protected,
 a.is_readonly as IsReadOnly,
 a.is_single_user as IsSingleUser,
 a.is_snapshot as IsSnapshot,
 a.last_family_number,
 a.last_lsn as LastLSN,
 a.last_media_number,
 a.machine_name as MachineName,
 a.name as BackupName,
 a.position as Position,
 a.recovery_model as RecoveryModel,
 a.server_name as ServerName,
 a.software_build_version as SoftwareVersionBuild,
 a.software_major_version as SoftwareVersionMajor,
 a.software_minor_version as SoftwareVersionMinor,
 a.sort_order as SortOrder,
 a.unicode_compare_style as UnicodeComparisonStyle,
 a.unicode_locale as UnicodeLocaleId,
 a.user_name as UserName,
 case a.type
	when 'D' then 'Database'
	when 'I' then 'Differential database'
	when 'L' then 'Log'
	when 'F' then 'File or filegroup'
	when 'G' then 'Differential file'
	when 'P' then 'Partial'
	when 'Q' then 'Differential partial'
 end as BackupTypeDescription,
 case [type]
	when 'D' then 1
	when 'I' then 5
	when 'L' then 2
	when 'F' then 4
	when 'G' then 6
	when 'P' then 7
	when 'Q' then 8
 end as BackupType,
 b.physical_device_name as 'FilePath'
 from
    msdb.dbo.backupset a inner join msdb.dbo.backupmediafamily b
        on a.media_set_id = b.media_set_id

To fill our data from SQL Server we’ll create 2 functions:

  1. Get-SQLServerBackups
  2. Test-SQLServerBackupExistence

The first will retrieve a set of backup information from a specified SQL Server instance, optionally filtering the data by a specific date or by only asking for the last x days.

Our 2nd function will take in an instance of our $Backups object, and will step through it, testing whether each file exists and removing those that don’t.

So with our first function:

function Get-SQLServerBackups
{param([string]$SQLServer,[int]$ExcludeDaysOlder=0,[string]$BackupNewerThan='01/01/1900')
push-location
import-module "SQLPS" -DisableNameChecking
pop-location

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($SQLServer)

$sql="select a.backup_finish_date as BackupFinishDate,
 a.backup_set_uuid as BackupSetGUID,
 a.backup_size as BackupSize,
 a.backup_start_date as BackupStartDate,
 a.begins_log_chain as BeginsLogChain,
 a.checkpoint_lsn as CheckpointLSN,
 a.collation_name as Collation,
 a.compatibility_level as CompatibilityLevel,
 a.compressed_backup_size as CompressedBackupSize,
 a.database_backup_lsn as DatabaseBackupLSN,
 a.database_creation_date as DatabaseCreationDate,
 a.database_name as DatabaseName,
 a.database_version as DatabaseVersion,
 a.description as BackupDescription,
 a.differential_base_guid as DifferentialBaseGUID,
 a.differential_base_lsn as DifferentialBaseLSN,
 a.expiration_date as ExpirationDate,
 a.family_guid as FamilyGUID,
 a.first_lsn as FirstLSN,
 a.first_recovery_fork_guid as FirstRecoveryForkID,
 a.flags as Flags,
 a.fork_point_lsn as ForkPointLSN,
 a.has_backup_checksums as HasBackupChecksums,
 a.has_bulk_logged_data as HasBulkLoggedData,
 a.has_incomplete_metadata as HasIncompleteMetaData,
 a.is_copy_only as IsCopyOnly,
 a.is_damaged as IsDamaged,
 a.is_force_offline as IsForceOffline,
 a.is_password_protected,
 a.is_readonly as IsReadOnly,
 a.is_single_user as IsSingleUser,
 a.is_snapshot as IsSnapshot,
 a.last_family_number,
 a.last_lsn as LastLSN,
 a.last_media_number,
 a.machine_name as MachineName,
 a.name as BackupName,
 a.position as Position,
 a.recovery_model as RecoveryModel,
 a.server_name as ServerName,
 a.software_build_version as SoftwareVersionBuild,
 a.software_major_version as SoftwareVersionMajor,
 a.software_minor_version as SoftwareVersionMinor,
 a.sort_order as SortOrder,
 a.unicode_compare_style as UnicodeComparisonStyle,
 a.unicode_locale as UnicodeLocaleID,
 a.user_name as UserName,
 case a.type
	when 'D' then 'Database'
	when 'I' then 'Differential database'
	when 'L' then 'Log'
	when 'F' then 'File or filegroup'
	when 'G' then 'Differential file'
	when 'P' then 'Partial'
	when 'Q' then 'Differential partial'
	end as BackupTypeDescription,
	case [type]
	when 'D' then 1
	when 'I' then 5
	when 'L' then 2
	when 'F' then 4
	when 'G' then 6
	when 'P' then 7
	when 'Q' then 8
	end as BackupType,
	b.physical_device_name as 'FilePath'
	 from msdb.dbo.backupset a inner join msdb.dbo.backupmediafamily b on a.media_set_id=b.media_set_id
where 1=1"

if ($ExcludeDaysOlder -ne 0){
    $sql = $sql+" and a.backup_start_date>dateadd(dd,-"+$ExcludeDaysOlder.tostring()+",current_timestamp)"
}

# Cast to [datetime] so this isn't a lexical string comparison
if ([datetime]$BackupNewerThan -gt [datetime]'01/01/1900'){
    $sql = $sql+" and a.backup_start_date>'"+$BackupNewerThan+"'"
}

$dt = Invoke-Sqlcmd -ServerInstance $SQLServer -Query $sql

return $dt
}

This is a very simple function; it just looks huge because of the SQL statement in $sql. If the parameters are supplied we append the appropriate where clause filters, then send the query to the SQL Server with the Invoke-Sqlcmd cmdlet, which returns the data in a DataTable that we pass back to the caller.
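So, for example, pulling back the last fortnight’s history and eyeballing a few columns might look like this (a quick sketch; swap in your own instance name):

$Backups = Get-SQLServerBackups -SQLServer "Server1" -ExcludeDaysOlder 14
$Backups | Select-Object -Property DatabaseName, BackupTypeDescription, BackupStartDate, FilePath -First 5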

And then our second function:

function Test-SQLServerBackupExistence
{param ($BackupArray)

# Move off the SQLSERVER: provider so Test-Path checks the file system
Set-Location c:\

foreach ($row in $BackupArray.FilePath | Sort-Object -Unique){
    if ((Test-Path $row) -ne $true){
        $BackupArray = $BackupArray | Where-Object {$_.FilePath -ne $row}
    }
}
return $BackupArray
}

This takes in an array as a parameter. By piping the $BackupArray.FilePath property through Sort-Object -Unique we only need to check each file once. If a file doesn’t exist we remove every ‘row’ with that FilePath from the array, and then return what remains of it.
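If you want to see exactly which files have gone missing, you can capture the distinct paths before and after the call and diff them; a quick sketch, assuming $Backups is already populated:

$Before  = $Backups.FilePath | Sort-Object -Unique
$Backups = Test-SQLServerBackupExistence $Backups
$After   = $Backups.FilePath | Sort-Object -Unique
# Anything returned here is recorded in msdb but no longer on disk
Compare-Object $Before $After | ForEach-Object {$_.InputObject}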

The one drawback with this method is that if the same file is being used repeatedly for backups, all of those backups will stay in the array, as we are only checking that the file exists, not examining what it actually contains.
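If that’s a concern, one option is to cross-check a suspect file’s actual contents against what msdb recorded, using the Get-BackupContents function from Day 14 and the BackupSetGUID column both sources share. A rough sketch (the file path is hypothetical):

# Assumes $Backups is populated and Get-BackupContents is loaded
$Path   = 'c:\psbackups\psbd1_backup.bak'
$InFile = (Get-BackupContents -FilePath $Path -SQLServer 'Server1').BackupSetGUID
$InMsdb = ($Backups | Where-Object {$_.FilePath -eq $Path}).BackupSetGUID
# Backup sets msdb thinks live in the file, but which the file no longer holds
$InMsdb | Where-Object {$InFile -notcontains $_}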

To load the data we’d call the functions as follows:

$Backups = Get-SQLServerBackups -SQLServer Server1 -ExcludeDaysOlder 7
$Backups = Test-SQLServerBackupExistence $Backups

As we’ve ensured that the DataTables returned from the disk scrape and from the SQL query share the same structure, we can combine them easily like so:

$Backups = Get-SQLServerBackups -SQLServer Server1 -ExcludeDaysOlder 7
$Backups = Test-SQLServerBackupExistence $Backups
$BackupFiles = @()
$BackupFiles += Get-SQLBackupFiles -FilePath c:\psbackups2 -SQLServer "Server2"
foreach($File in $BackupFiles){
    $Backups += Get-BackupContents -FilePath $File -SQLServer "Server1"
}

As we’re using a standard PowerShell DataTable we can also use standard PowerShell cmdlets to save it to disk and reload it at a later date. This means we can scrape disks overnight during periods of low usage and load the results up during the day, or update our records from SQL Server on a daily basis to speed the process up.

To save the information:

$Backups | Export-CliXML c:\backup-store.xml

To load the information:

$Backups = Import-CliXML c:\backup-store.xml
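Putting those pieces together, a daily refresh of the store might look something like this (a sketch, assuming the functions above are loaded and the store file exists from a previous run):

$Backups  = @(Import-CliXML c:\backup-store.xml)
$Backups += Get-SQLServerBackups -SQLServer "Server1" -ExcludeDaysOlder 1
# Sort-Object -Unique drops rows duplicated between the store and the fresh query
$Backups  = $Backups | Sort-Object -Property BackupSetGUID, FilePath -Unique
$Backups  = Test-SQLServerBackupExistence $Backups
$Backups | Export-CliXML c:\backup-store.xml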

Tomorrow we’ll be looking at how we can use these new objects to ensure we restore the right files in the right order, in a more robust manner than just relying on file names and last-modified dates.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 14 of 31 Days of SQL Server Backup and Restore using PowerShell: Reading Backup information into a custom object

Yesterday we looked at finding SQL Server Backup files and then examining their contents. Today we’re going to look at how we can best record this information for later use in our scripts.

We’re going to create 2 functions:

  1. Scan a given folder structure for SQL Server backup files, and record the ones with valid backup headers
  2. Read a given file and return the details of the SQL Server backups contained in it

Function 1 will be Get-SQLBackupFiles:

function Get-SQLBackupFiles
{param ([string]$FilePath, [string]$SQLServer)
    import-module "SQLPS" -DisableNameChecking
    $tmpout = @()
    $sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($SQLServer)
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

    $files = Get-ChildItem $FilePath

    foreach ($file in $files){
        $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($file.FullName,$devicetype)
        $restore.Devices.add($restoredevice) | Out-Null
        $errcnt=0
        try{
            $restore.ReadMediaHeader($sqlsvr) | Out-Null
        }
        catch [System.Exception]{
             $errcnt =1
        }
        finally {
            if ($errcnt -ne 1){
                $tmpout += $file.FullName
            }
            $errcnt = 0
        }
        $restore.Devices.remove($restoredevice) | out-null
        Remove-Variable restoredevice
    }
    return $tmpout
 }

This is based on the scriptlet we looked at yesterday. We pass in the folder we want to search and the SQL Server instance we want to run the query on. Remember that the SQL Server service on that server will need read access to the file area you’re scanning. The function returns an array of all the files that have the correct headers. Out-Null is appended to many of the lines to prevent the output from those commands polluting the returned array.

By having this as a discrete function, we could also make use of the Start-Job asynchronous method we looked at in Day 8 (Asynchronous Backups – Part 2 – PowerShell Jobs). If we wanted to scan lots of files we could split the areas into logical divisions and scan them in parallel across a number of SQL instances, as sketched below.
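For example, something along these lines (a rough sketch; the script path holding the functions is hypothetical):

# Each job runs in its own session, so dot-source the function definitions first
$job1 = Start-Job -ScriptBlock {
    . c:\scripts\backup-functions.ps1
    Get-SQLBackupFiles -FilePath c:\psbackups -SQLServer "Server1"
}
$job2 = Start-Job -ScriptBlock {
    . c:\scripts\backup-functions.ps1
    Get-SQLBackupFiles -FilePath c:\psbackups2 -SQLServer "Server2"
}
Wait-Job $job1, $job2 | Out-Null
$BackupFiles = @(Receive-Job $job1) + @(Receive-Job $job2)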

Our 2nd function is Get-BackupContents:

function Get-BackupContents
{param ([string]$FilePath,[string]$SQLServer)
    import-module "SQLPS" -DisableNameChecking

    $sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($SQLServer)
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($FilePath,$devicetype)
    $restore.Devices.add($restoredevice) | Out-Null
    $temp = $restore.ReadBackupHeader($sqlsvr)
    $temp | Add-Member -MemberType NoteProperty -Name FilePath -Value $FilePath
    $restore.Devices.Remove($restoredevice) | Out-Null
    return $temp
}

Here we pass in a filepath to the backup file, and a SQL Server instance to use to read it. We store the output in a temporary object, add the filepath to that object, and then return it.

So we can call these like so:

$BackupFiles = @()
$Backups = @()

$BackupFiles += Get-SQLBackupFiles -FilePath c:\psbackups -SQLServer "Server1"
$BackupFiles += Get-SQLBackupFiles -FilePath c:\psbackups2 -SQLServer "Server2"

foreach($File in $BackupFiles){
    $Backups += Get-BackupContents -FilePath $File -SQLServer "Server1"
}

So we now have a PowerShell object $Backups which contains all of our backup information. This means we can now use PowerShell’s standard cmdlets to pull information back. For example, this code snippet:

$backups | Where-Object {($_.DatabaseName -eq "psdb1") -and ($_.BackupTypeDescription -eq "Database")} | Sort-Object -Property BackupStartDate -Descending | Format-List -Property DatabaseName, FilePath, BackupTypeDescription, BackupStartDate, BackupFinishDate

will find all of the full database backups of psdb1, return them in descending date order, and then extract just the information we’re interested in.
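And if all we want is the single most recent full backup, we can tack Select-Object onto the end of the same pipeline (a quick sketch):

$LatestFull = $backups |
    Where-Object {($_.DatabaseName -eq "psdb1") -and ($_.BackupTypeDescription -eq "Database")} |
    Sort-Object -Property BackupStartDate -Descending |
    Select-Object -First 1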

Tomorrow we’ll look at getting the same information from the original SQL Server that performed the backups and putting it into the same object. We’ll also cover writing this information back to a SQL Server, so you can reuse the information without having to scan the disks every time.

Then on Thursday (Day 16) we’ll look at how we can use this object to ensure we restore backups in the correct order, and to start working towards being able to perform a Point in Time restore to any arbitrary point for which we have backups available.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 13 of 31 Days of SQL Server Backup and Restore using PowerShell: Reading the contents of a Backup File

Yesterday we saw how easy it is to restore a SQL Server database from backups with PowerShell if we have a folder that contains:

  • Only SQL backup files
  • All files are in chronological/LSN order
  • The oldest file is the Full backup
  • There are no differential or partial backups in the mix
  • Only one database’s backups
  • Each backup file only contains 1 backup

Unfortunately we don’t always get that, so we need to be able to find out what we have and work out the best way to restore our database. Over the next 2 days I’m going to introduce how we can read all the files we have, extract the information we need, and store it in a custom PowerShell object so we can manipulate it for ourselves. We will then look at how we can populate this object if we’re lucky enough to also have access to the original server.

Today we’ll look at reading the contents of the backup files and finding out what’s inside.

Let’s start with just working out which files in a folder are actually of interest to us, taking the worst-case scenario: a folder full of files, some of which may be SQL Server backups of one kind or another, but where someone hasn’t put file extensions on, or has been very lax with the ones they have. Like so:

PS C:\psbackups> Get-ChildItem

    Directory: C:\psbackups

Mode          LastWriteTime     Length Name
----          -------------     ------ ----

-a---  14/09/2013     07:46          0 file1
-a---  14/09/2013     07:46    5605888 file2

PS C:\psbackups>

OK, in this example you can probably deduce which one is not a backup file! But if you had folders full of these, how would you work out which ones were of interest? Well, you’d use the ReadMediaHeader method of your $Restore object to quickly try to read the header that SQL Server writes at the start of all its backup files. This is a very quick operation even on huge backup files, as the header sits right at the start. Unfortunately, ReadMediaHeader doesn’t return a nice “No, this isn’t a SQL Server backup” message, so we have to wrap it in a Try, Catch, Finally block and handle it ourselves:

import-module "SQLPS" -DisableNameChecking

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("WIN-C0BP65U3D4G")
$restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

$files = gci C:\psbackups

$BackupFiles = @()

foreach ($file in $files){
    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($file.FullName,$devicetype)

    $restore.Devices.add($restoredevice)
    $errcnt=0
    try{
        $restore.ReadMediaHeader($sqlsvr)
    }
    catch [System.Exception]{
        write-output "$file is not a sql backup file `n"
        $errcnt =1
    }
    finally {
        if ($errcnt -ne 1){
            write-output "$file is a sql backup file `n"
            $BackupFiles += $file
        }
        $errcnt = 0
    }
    $restore.Devices.remove($restoredevice)
    Remove-Variable restoredevice
}

We add the positive matches to a $BackupFiles array so we can refer to them later.

So, what can you do if someone’s running their backups like this:

backup database psdb1 to disk='c:\psbackups\psbd1_backup.bak'
backup log psdb1 to disk='c:\psbackups\psbd1_backup.bak'
backup database psdb1 to disk='c:\psbackups\psbd1_backup.bak'
backup log psdb1 to disk='c:\psbackups\psbd1_backup.bak'

I have seen something similar out in the wild: they had a BIG backup file for quite a small database, and making sure you restored the version you wanted took a lot of looking up. In this case we use PowerShell’s implementation of RESTORE HEADERONLY, ReadBackupHeader:

Import-Module "SQLPS" -DisableNameChecking

$SQLSvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("WIN-C0BP65U3D4G")
$Restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
$DeviceType = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

$file = gci c:\psbackups\psbd1_backup.bak

$RestoreDevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($file.FullName,$devicetype)
$Restore.Devices.Add($RestoreDevice)
$Restore.ReadBackupHeader($sqlsvr) | Format-List -Property DatabaseName, BackupStartDate, FirstLSN, BackupTypeDescription, Position
$Restore.Devices.Remove($restoredevice)

Here we pipe the output from ReadBackupHeader through Format-List to extract just the values we want:

DatabaseName          : psdb1
BackupStartDate       : 16/09/2013 07:18:37
FirstLSN              : 40000000049700037
BackupTypeDescription : Database
Position              : 1

DatabaseName          : psdb1
BackupStartDate       : 16/09/2013 07:18:37
FirstLSN              : 40000000049000001
BackupTypeDescription : Transaction Log
Position              : 2

DatabaseName          : psdb1
BackupStartDate       : 16/09/2013 07:18:37
FirstLSN              : 41000000004700037
BackupTypeDescription : Database
Position              : 3

DatabaseName          : psdb1
BackupStartDate       : 16/09/2013 07:18:38
FirstLSN              : 41000000002600001
BackupTypeDescription : Transaction Log
Position              : 4

ReadBackupHeader returns the same information as RESTORE HEADERONLY does.
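If you want to see the two side by side, the T-SQL version can be run through Invoke-Sqlcmd and formatted the same way (a quick sketch against the same instance):

Import-Module "SQLPS" -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "WIN-C0BP65U3D4G" -Query "RESTORE HEADERONLY FROM DISK = 'c:\psbackups\psbd1_backup.bak'" |
    Format-List -Property DatabaseName, BackupStartDate, FirstLSN, BackupTypeDescription, Position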

Now that we have the position of each backup within the file, we can restore the one we want by using the FileNumber property:

Import-Module "SQLPS" -DisableNameChecking

$SQLSvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("WIN-C0BP65U3D4G")
$Restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
$DeviceType = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

$file = gci c:\psbackups\psbd1_backup.bak

$restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($file.FullName,$devicetype)
$Restore.Devices.add($restoredevice)
$Restore.Database = "psdb1"
$Restore.FileNumber = 3
$Restore.SQLRestore($SQLSvr)
$restore.Devices.Remove($restoredevice)

This will restore the most recent full backup from our example.
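If you wanted to roll that full backup forward with the log at position 4 instead, you could restore position 3 without recovery and then apply the log. A rough sketch reusing the objects above; note the Action property, which tells SMO to issue RESTORE LOG rather than RESTORE DATABASE:

$Restore.Devices.Add($restoredevice)
$Restore.NoRecovery = $true
$Restore.FileNumber = 3
$Restore.SQLRestore($SQLSvr)   # the full backup, left restoring

$Restore.Action = [Microsoft.SqlServer.Management.Smo.RestoreActionType]::Log
$Restore.NoRecovery = $false
$Restore.FileNumber = 4
$Restore.SQLRestore($SQLSvr)   # the log backup, which recovers the database
$Restore.Devices.Remove($restoredevice)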

So now we can find valid SQL Server backups on a given filesystem, and then find out what’s contained within them. Tomorrow we’ll look at taking this information, and putting it into a custom object so we can collate the results from multiple files and build up our restore plan.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 12 of 31 Days of SQL Server Backup and Restore using PowerShell: Simple Transaction Restore

Yesterday we looked at the simplest possible example of a SQL Server restore: restoring a single backup file. Now we’re going to get a little bit smarter and also restore transaction logs.

So the scenario is that you have a folder containing your full database backup and the subsequent transaction log backups, like so:

PS C:\> gci c:\psbackups

    Directory: C:\psbackups

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---        13/09/2013     11:23    2648576 psbd1_backup_201309131100.bak
-a---        13/09/2013     11:23    1800704 psbd1_backup_201309131105.trn
-a---        13/09/2013     11:23      90624 psbd1_backup_201309131110.trn
-a---        13/09/2013     11:23      90624 psbd1_backup_201309131115.trn

PS C:\>

So how do we do this? Well, exactly how we’d do it in T-SQL: restore the full backup with NoRecovery, restore all the transaction log backups in order, also with NoRecovery, and then recover the database after the last one has been restored.

Normally that means writing lots of lines of T-SQL (it’s always just before the next full backup, isn’t it, and you have 92 transaction logs to restore!). Thanks to PowerShell’s objects we can do better than that.

So to the script:

Import-Module SQLPS -DisableNameChecking

$SQLSvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server -ArgumentList ("Server1")
$Restore = New-Object -TypeName Microsoft.SQLServer.Management.Smo.Restore
$DeviceType =[Microsoft.SqlServer.Management.Smo.DeviceType]::File

$Restore.Database='psrestore'
$Restore.NoRecovery = $TRUE

$Files = Get-ChildItem C:\psbackups\
$i=1

foreach ($File in $Files){
	if($i -eq $Files.count){
	    $Restore.NoRecovery= $FALSE
	}
	$BackupDevice = New-Object Microsoft.SqlServer.Management.Smo.BackupDeviceItem -ArgumentList $file.FullName, $DeviceType
	$Restore.Devices.Add($BackupDevice)
	$Restore.SQLRestore($SQLSvr)
	$Restore.Devices.Remove($BackupDevice)
	remove-variable BackupDevice
	$i++
}

First we do our normal setup, loading the SMO module and creating some basic objects. Next we set the name of the database we’re restoring, and tell our $Restore object that we don’t want to recover the database at the end of the restore (the default is to recover it).

We use the Get-ChildItem cmdlet to load the contents of our backup directory into an object called $Files, and create a counter variable $i and initialise it to 1.

We now step through each $File in our $Files object. We use the $Files.count property and $i to work out when we are restoring the last file in the list; when we are, we set $Restore.NoRecovery to $FALSE, thereby recovering the database after that final restore.

We restore the file and increment the counter variable by 1, and then go through the loop again if we aren’t on the last file.

We can do the same with the Restore-SqlDatabase cmdlet introduced with SQL Server 2012:

Import-Module SQLPS -DisableNameChecking

$ServerName = "Server1"

$RestoreDatabase='psrestore'
$Files = Get-ChildItem C:\psbackups\
$i=1

foreach ($File in $Files){
    if($i -eq $Files.count){
        # Last file in the list, so restore it with recovery
        Restore-SqlDatabase -ServerInstance $ServerName -Database $RestoreDatabase -BackupFile $File.FullName
    }
    else{
        Restore-SqlDatabase -ServerInstance $ServerName -Database $RestoreDatabase -BackupFile $File.FullName -NoRecovery
    }
    $i++
}

When we use this cmdlet we can’t set the NoRecovery property as we did with SMO, as it’s a switch rather than a property. So we use the same logic to spot when we’re restoring the last file, and restore that one without the -NoRecovery switch.
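As -NoRecovery is a switch it will also accept an explicit value, which lets you collapse the loop down to a single call; a sketch, assuming $i is reset to 1 first:

$i = 1
foreach ($File in $Files){
    # -NoRecovery is $true for every file except the last one
    Restore-SqlDatabase -ServerInstance $ServerName -Database $RestoreDatabase -BackupFile $File.FullName -NoRecovery:($i -ne $Files.count)
    $i++
}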

This is fairly straightforward, and simple to write if you know that you only have the files for 1 database in a folder, and that they’re in the correct chronological/LSN order with the full backup being the oldest. But what do you do if you just have a folder full of random files, from multiple databases, from multiple days, with a mix of full, differential and transaction backups? Well, that’s what we’re going to explore over the next couple of days. Tomorrow we’ll begin by looking at how to look inside the backup files.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.
