So far during the restores portion of this series we’ve been assuming that we can restore our database’s files back to the same locations they occupied on the original server. Often this is not the case, even on a dedicated restore server:
- the files may already exist
- there’s a good chance that the restore server’s filesystems are laid out differently
- you may want to restore the same database to multiple points in time
- or many other reasons
So today we’re going to look at how you can quickly relocate every file of a restored database without knowing anything about them prior to the restore.
To do this we’re going to use another method of the SMO Restore object, ReadFileList. This works in the same manner as the T-SQL RESTORE FILELISTONLY: when run against a backup file it lists all the database files contained within it. Under PowerShell this returns a DataTable object, which we can loop through to identify every file. We can then use this information, plus the details we want to provide, to build a RelocateFile object for each file, which we add to the RelocateFiles collection of the Restore object.
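As a quick illustration, here’s a minimal sketch of reading the file list from a single backup file. The backup path and the "Server1" connection are placeholders for this sketch only; the full environment used in the rest of the post is set up below.

# Minimal sketch: list the database files held inside one backup file.
# "c:\Backups\MyDB.bak" is a hypothetical path used purely for illustration.
Import-Module sqlps -DisableNameChecking

$SQLSvr = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server("Server1")
$restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
$device = New-Object -TypeName Microsoft.SqlServer.Management.Smo.BackupDeviceItem("c:\Backups\MyDB.bak",[Microsoft.SqlServer.Management.Smo.DeviceType]::File)
$restore.Devices.Add($device)

# Same information as RESTORE FILELISTONLY - one row per database file in the backup
$restore.ReadFileList($SQLSvr) | Select-Object LogicalName, PhysicalName, Type, Size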
For all the examples in today’s post we’ll assume that the following PowerShell has been run to build the basic restore environment:
Import-Module sqlps -DisableNameChecking
. "c:\PSScripts\SQLBackupFunctions.ps1"

$backupfiles = @()
$backups = @()
$RestoreFiles = @()

$backupfiles += Get-SQLBackupFiles -FilePath c:\DiffRestore -SQLServer "Server1"
foreach($file in $BackupFiles){
    $backups += Get-BackupContents -FilePath $file -SQLServer "Server1"
}

$RestoreFiles += $Backups | Where-Object {$_.BackupTypeDescription -eq "Database"} | Sort-Object LastLSN | Select-Object -First 1
$RestoreFiles += $Backups | Where-Object {$_.BackupTypeDescription -eq "Database Differential"} | Sort-Object LastLSN | Select-Object -First 1
$tmpLSN = $RestoreFiles | Measure-Object -Property LastLSN -Maximum
$RestoreFiles += $Backups | Where-Object {($_.LastLSN -ge $tmpLSN.Maximum) -and ($_.BackupTypeDescription -eq "Transaction Log")}

$restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
$HasThereBeenAnError = 0
As a basic example we’ll relocate all the database files in a backup file to c:\restorefiles\:
$SQLSvr = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server("Server1")
$Path = "c:\restorefiles\"

$restoredevice = New-Object -TypeName Microsoft.SqlServer.Management.Smo.BackupDeviceItem($RestoreFiles[0].FilePath,$devicetype)
$restore.Devices.Add($restoredevice)
$restore.Database = $RestoreFiles[0].DatabaseName
$DBFiles = $restore.ReadFileList($SQLSvr)

if (($DBFiles | Measure-Object -Property IsPresent -Average).Average -eq 1){
    foreach($File in $DBFiles){
        $rf = New-Object -TypeName Microsoft.SqlServer.Management.Smo.RelocateFile
        $rf.LogicalFileName = $File.LogicalName
        $rf.PhysicalFileName = $Path+(Split-Path $File.PhysicalName -Leaf)
        $restore.RelocateFiles.Add($rf)
        Remove-Variable rf
    }
}else{
    Write-Output "Does not contain all db files"
    break;
}
$restore.SqlRestore($SQLSvr)
We use ReadFileList to read the contents of our backup file into $DBFiles. We then use this line to make sure the backup contains all the db files:
if (($DBFiles | Measure-Object -Property IsPresent -Average).Average -eq 1){
This makes use of the fact that PowerShell treats $True as 1 and $False as 0 when you run a mathematical function across them. If all the files are present then the average will be 1; if any files are not present (IsPresent = False) then the average will be less than 1. This will happen if you pick a file that contains a File or FileGroup backup. We’ll cover performing File and FileGroup restores in a later article, but we’ll pretend they don’t exist for this one!
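A quick illustration of the idea, using hypothetical objects standing in for the file list rows:

# Hypothetical illustration: $True averages as 1, $False as 0
$allPresent  = @([pscustomobject]@{IsPresent=$True},[pscustomobject]@{IsPresent=$True})
$someMissing = @([pscustomobject]@{IsPresent=$True},[pscustomobject]@{IsPresent=$False})

($allPresent  | Measure-Object -Property IsPresent -Average).Average   # 1 - everything is there
($someMissing | Measure-Object -Property IsPresent -Average).Average   # 0.5 - at least one file missing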
Assuming we’ve got a backup with all the files, we then loop through the data table. For each file we create a new RelocateFile object. We populate the LogicalFileName property directly and then build up a new path for the PhysicalFileName. We use the Split-Path cmdlet with its -Leaf switch to split the filename and extension from the rest of the path (i.e. Split-Path "c:\test\file.txt" -Leaf would return file.txt). We then add the RelocateFile object to the RelocateFiles collection of our Restore object.
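For example, with hypothetical paths, the reshaping looks like this:

# Split-Path -Leaf keeps only the filename and extension
Split-Path "c:\test\file.txt" -Leaf                          # returns file.txt

# Rebuild the physical name under the new folder
$Path = "c:\restorefiles\"
$Path + (Split-Path "c:\olddata\MyDB_Data.mdf" -Leaf)        # returns c:\restorefiles\MyDB_Data.mdf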
The ReadFileList method can also be used to check the size of the resulting restore. When SQL Server backs up a database it only writes out the data pages, ignoring empty space in the data files. So a database that has 250GB allocated on disk but only contains 120GB of data will produce a smaller backup than another database that has 250GB allocated on disk and contains 240GB of data. However, when SQL Server restores a database it restores the database files to the size they were when the backup was taken, complete with empty space. So in the first case, despite only having 120GB of data in your database, you’re going to need 250GB of spare disk space to restore it. And despite all the pleas on dba.stackexchange.com there’s no way around that. The other problem is that as you roll forward transaction log backups the required amount of space on disk can change, so just because you can restore the full backup it doesn’t follow that you can restore the rest of the chain.
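As a rough sketch of how you might use this, the Size column returned by ReadFileList is reported in bytes, so it can be summed to see how much disk a single backup will need once restored. This assumes the $restore object already has a backup device attached, as in the example above:

# Rough sketch: total disk space this backup will need once restored,
# regardless of how big the backup file itself is.
$total = ($restore.ReadFileList($SQLSvr) | Measure-Object -Property Size -Sum).Sum
Write-Output ("Restore will need {0:N1} GB of disk space" -f ($total/1GB))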
By running ReadFileList across all the files to be restored we can calculate the maximum size each file in the database will reach, and either place the files on filesystems with enough space or decide not to proceed with a restore that’s doomed to failure. We can do this as follows:
$SQLServer = "Server1"    # instance used to read the backup file lists
$DBfiles = @{}
$DBFileSize = @()

foreach ($file in $RestoreFiles){
    $sqlsvr = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server($SQLServer)
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
    $restoredevice = New-Object -TypeName Microsoft.SqlServer.Management.Smo.BackupDeviceItem($file.FilePath,$devicetype)
    $restore.Devices.Add($restoredevice)
    $DBfiles = $restore.ReadFileList($sqlsvr)
    foreach ($tmpfile in $DBfiles){
        if (($DBFileSize.LogicalName -contains $tmpfile.LogicalName) -eq $False){
            # First time we've seen this file - record its details
            $tmpobj = New-Object System.Object
            $tmpobj | Add-Member -Type NoteProperty -Name LogicalName -Value $tmpfile.LogicalName
            $tmpobj | Add-Member -Type NoteProperty -Name PhysicalName -Value $tmpfile.PhysicalName
            $tmpobj | Add-Member -Type NoteProperty -Name Size -Value $tmpfile.Size
            $DBFileSize += $tmpobj
        }
        if ($tmpfile.Size -gt ($DBFileSize | Where-Object {$_.LogicalName -eq $tmpfile.LogicalName}).Size){
            # Keep the largest size seen for this file across the whole backup chain
            ($DBFileSize | Where-Object {$_.LogicalName -eq $tmpfile.LogicalName}).Size = $tmpfile.Size
        }
    }
    $restore.Devices.Remove($restoredevice)
}

$RestoreLocations = @("c:\restores\","d:\restores\","e:\restores\")
$LocationCount = 0
$MarginForError = 200*1024*1024
$FSO = New-Object -Com Scripting.FileSystemObject
$RelocateFile = @()
$CurrentlyNeeded = @{}

$i = 0
while ($i -lt $RestoreLocations.Count){
    $CurrentlyNeeded[$i] = 0
    $i++
}

$i = 0
while ($i -lt $DBFileSize.Count){
    if ($DBFileSize[$i].Size + $MarginForError + $CurrentlyNeeded[$LocationCount] -gt $FSO.GetDrive($(Split-Path $RestoreLocations[$LocationCount] -Qualifier)).AvailableSpace){
        # Not enough room on this drive, try the next location
        $LocationCount++
        if($LocationCount -eq $RestoreLocations.Count){
            Write-Output "No space anywhere for this file!"
            break;
        }
    }else{
        $rf = New-Object -TypeName Microsoft.SqlServer.Management.Smo.RelocateFile
        $rf.LogicalFileName = $DBFileSize[$i].LogicalName
        $rf.PhysicalFileName = $RestoreLocations[$LocationCount]+(Split-Path $DBFileSize[$i].PhysicalName -Leaf)
        $RelocateFile += $rf
        Remove-Variable rf
        $CurrentlyNeeded[$LocationCount] += $DBFileSize[$i].Size
        $i++
        $LocationCount = 0
    }
}

$RestoreServer = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server("RestoreServer")
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
foreach ($backup in $RestoreFiles | Sort-Object LastLSN){
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restoredevice = New-Object -TypeName Microsoft.SqlServer.Management.Smo.BackupDeviceItem($backup.FilePath,$devicetype)
    $restore.Devices.Add($restoredevice)
    $restore.FileNumber = $backup.Position
    $restore.Database = $backup.DatabaseName
    foreach ($f in $RelocateFile){
        $restore.RelocateFiles.Add($f)
    }
    if ($backup.LastLSN -eq ($RestoreFiles | Sort-Object LastLSN)[-1].LastLSN){
        # Last backup in the chain - bring the database online
        $restore.NoRecovery = $False
    }else{
        $restore.NoRecovery = $True
    }
    $restore.SqlRestore($RestoreServer)
    $restore.Devices.Remove($restoredevice)
}
This snippet could potentially take a while to run as it has to read the filelist from every backup file passed into it, so if you’ve got 100 transaction logs to process after your Full Backup be prepared for a bit of a wait!
Quite a lot going on in here. We start off by declaring a hash table $DBfiles and an array $DBFileSize. Next we loop through our list of backup files in $RestoreFiles. For each one we call ReadFileList and put the results into $DBfiles. We then loop through each file returned. If we’ve not seen the particular file before, we create an object holding its LogicalName, PhysicalName and Size, and add it to the $DBFileSize array. If we have seen the file before, we compare the size returned by ReadFileList to the one in $DBFileSize, and if the new value is larger we replace the value in $DBFileSize. Then we go back around the loop.
Once we’ve been through every file, we move on to actually checking our free space. We initialise a couple of variables first:
- $RestoreLocations – an array holding all the allowed restore locations (one per drive, or we’ll get confused later!)
- $LocationCount – a counter to keep track of which location from the above we’re currently looking at
- $MarginForError – just to leave some room for anything unexpected happening between checking and restoring; in the example this is set to 200MB
- $FSO – a FileSystemObject COM object used to get free drive space (see the short sketch after this list)
- $RelocateFile – an array which will hold our relocation information for pushing into our restore
- $CurrentlyNeeded – a hash table we’ll programmatically populate next; it keeps track of the space our restore is going to use on each location
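If you’ve not used the FileSystemObject from PowerShell before, the free-space check at the heart of the loop looks something like this (the folder is hypothetical):

# Free space (in bytes) on the drive that holds a restore folder
$FSO = New-Object -Com Scripting.FileSystemObject
$drive = $FSO.GetDrive((Split-Path "c:\restores\" -Qualifier))   # -Qualifier returns "c:"
$drive.AvailableSpace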
Next we populate the hashtable $CurrentlyNeeded with an entry of 0 for every entry in our $RestoreLocations array. Then we reset our counter $i to 0 for the next section.
We now loop through $DBFileSize. We’re using a While loop here as we want to try every possible location for every file in the array until we’ve either exhausted the possibilities or found somewhere for it to go.
The first thing we do in the loop is check whether the size of the current file, plus the space we’re already planning to use on the current drive, plus our $MarginForError, will fit within the free space on the drive containing the folder. If it won’t, we increment our $LocationCount by 1 and go round the loop again with the same database file. If we increment $LocationCount beyond the end of our $RestoreLocations array, we write out an error message and stop looping, as we don’t have anywhere with enough free space to restore this file.
If on one of the loops we can find some space, then we build a new RelocateFile object ($rf) and populate it with our file details. We add it to our $RelocateFile array, add the size of the restored file to the space used on the drive we’re going to put it on, then increment $i so we move on to the next file, and reset $LocationCount to 0 so we start from the beginning of our locations again.
Next we use the same restore pattern as we used in yesterday’s post, with the addition of:
foreach ($f in $RelocateFile){
    $restore.RelocateFiles.Add($f)
}
to pass our newly created array of file relocations into the restore object one by one. Then, all being well, we restore our database to its new home!
Today’s is a long post, but the core concepts are in the first half. The longer section afterwards is there to give you some hints about how you can use this functionality to really streamline your restore automation.
Tomorrow, we’re going to start looking at restoring to points in the database’s life other than the latest backup.
This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.