Monthly Archives: September 2013

Day 27 of 31 Days of SQL Server Backup and Restore using PowerShell: Why the Long version?

You might have noticed over the last 26 days that I seem to prefer using the long form of PowerShell backups over the more concise cmdlet versions.

There’s no real reason for this, as they’re all building up the same T-SQL BACKUP statements at the end of the day. However, I find that when automating things, I like building up a backup or restore one option at a time:

$Restore.Replace=$True
$Restore.NoRecovery = $False

This gives me very easy control over what gets passed in to my backup or restore:

if ($x -eq $y){
    $Restore.Replace = $True
}else{
    $Restore.Replace = $False
}

and so on for all the parameters you might want to add.

But with the newer Restore-SQLDatabase cmdlet you can’t use:

if ($x -eq $y){
    $RecoveryOption = $True
}else{
    $RecoveryOption = $False
}
Restore-SQLDatabase -Database $DBName -BackupFile $backupfile -NoRecovery $RecoveryOption

Because -NoRecovery is a switch rather than a parameter, specifying it turns the feature on and omitting it leaves it off; there is no value to pass through to it in this form. That can be a bit limiting when you want complete control within a flexible set of scripts.

I find the new cmdlets are much easier when you just want to quickly back something up from the command line. Today I used these two lines to quickly back up all the databases on a dev/test server before we tried something destructive:

cd SQLSERVER:\SQL\DEVBOX1\DEFAULT\DATABASES
ls|%{$_|Backup-SQLDatabase}

Much simpler and quicker to type than anything else I can think of!

(Just in case you can’t read what is a great example of bad code, I’m making use of a couple of shortcuts in PowerShell: ls is a system alias for Get-ChildItem, which, coming from a Unix background, I tend to use naturally. And % is an alias for ForEach-Object, which steps through the objects in the piped collection, loading $_ on each iteration. Plus PowerShell doesn’t mind about missing spaces if things aren’t ambiguous. This type of code is great for one-offs when you just want to do something quickly, but it’s not something you ever want to leave in proper scripts or functions, and it’s not fun to discover when debugging, as it probably means shortcuts were also taken elsewhere!)
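For comparison, written out without the aliases that one-liner becomes something like this (DEVBOX1 being my dev server):

```powershell
# The same quick backup, expanded to full cmdlet and parameter names
Set-Location SQLSERVER:\SQL\DEVBOX1\DEFAULT\DATABASES
Get-ChildItem | ForEach-Object { $_ | Backup-SqlDatabase }
```
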

So to answer the original question, it’s just down to a personal preference, and using the right tool at the right time.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.


Day 26 of 31 Days of SQL Server Backup and Restore using PowerShell: Restoring SQL Server Analysis Services databases

Yesterday we looked at backing up SQL Server Analysis Services databases, and today we’ll look at restoring them.

Just as with backups, prior to SQL Server 2012 SQLPS, restores had to be done a little differently. Unlike Backup, which is a method of a Database object, Restore is a method of the Server class. This makes sense: if you’re restoring a database, it won’t necessarily exist for you to create an object from. In its simplest form you just need to provide the location of the backup file:

Import-Module SQLPS -DisableNameChecking

$SSASInstance = "SSASServer1\Instance1"

$BackupFile = "c:\backups\db1.abf"

# Make sure the AMO assembly is loaded, in case SQLPS hasn't already loaded it
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null
$SSASServer = New-Object Microsoft.AnalysisServices.Server
$SSASServer.Connect($SSASInstance)

$SSASServer.Restore($BackupFile)

However the Restore method can take more parameters to allow for more flexibility, in much the same way as we saw with the Backup method yesterday. For Restore these are:

Parameter | Type | Description
BackupFile | String | Path to backup file
DatabaseName | String | Name of database to restore as (default is to restore with the same name as backed up from)
OverWrite | Boolean | Whether to overwrite an existing db; default is False
RestoreLocation | Array of RestoreLocations | For restoring remote dbs
RestoreSecurity | RestoreSecurity enum | Specifies what to do with security roles after the restore (Copy, Ignore, Skip)
Password | String | Password to restore a password-protected backup
StorageLocations | String | Specify a different storage location for the restored db

These can also be specified by creating a RestoreInfo object and passing that to the Restore method. Just as with the Backup method yesterday, to supply a value you must provide all the preceding values as well, even if they’re just empty strings or NULL.
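As a quick sketch of the positional behaviour, restoring under a hypothetical new name db2 and allowing overwrite means supplying BackupFile and DatabaseName before the OverWrite flag:

```powershell
# Positional parameters: BackupFile, DatabaseName, OverWrite
# "db2" is just an example name for the restored database
$SSASServer.Restore($BackupFile, "db2", $True)
```
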

With the release of the SQL Server 2012 PowerShell provider, MS also released a Restore-ASDatabase cmdlet. This can make restores much simpler to automate:

Import-Module SQLPS -DisableNameChecking

$SSASInstance = "SSASServer1\Instance1"
$BackupFiles = Get-ChildItem "c:\SSASbackups\"

foreach ($file in $BackupFiles){
    Restore-ASDatabase -Server $SSASInstance -RestoreFile $file.fullname -AllowOverwrite
}



Day 25 of 31 Days of SQL Server Backup and Restore using PowerShell: Backing up SQL Server Analysis Services databases

So far in this series we’ve only looked at SQL Server database backups, but PowerShell can also be used to backup and restore SQL Server Analysis Services objects.

Prior to the release of the latest version of SQLPS with SQL Server 2012, SSAS backups were achieved via the Backup method of an Analysis Services Database object:

Import-Module SQLPS -DisableNameChecking

$SSASInstance = "SSASServer1\Instance1"
$DBName = "db1"

$BackupFile = $DBName+"_"+[DateTime]::Now.ToString("yyyyMMdd_HHmmss")+".abf"

# Make sure the AMO assembly is loaded, in case SQLPS hasn't already loaded it
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null
$SSASServer = New-Object Microsoft.AnalysisServices.Server
$SSASServer.Connect($SSASInstance)

$DB = $SSASServer.databases[$DBName]
$DB.backup($BackupFile)

Note that we don’t have to build a backup object, or any devices, so this is all a lot simpler than a Data Platform backup. However there is less flexibility. Some other options are available, as the Backup method is ‘overloaded’ (it has a number of different implementations). All the possible options are listed below.

Parameter | Type | Action
File | String | Filename to back up to
allowOverwrite | Boolean | Whether the backup can overwrite an existing file; default is no
backupRemotePartitions | Boolean | Whether remote partitions should be backed up; default is no
RemoteLocations | Array of Analysis Services backup locations | Where remote partitions should be backed up to
ApplyCompression | Boolean | Whether to compress the backup or not; default is off
Password | String | Password to encrypt backup files

Just to make life slightly more awkward, you need to specify every parameter before the last one you want to use. So if you want to set compression on, you need to specify File, allowOverwrite, backupRemotePartitions and RemoteLocations as well, even if that just means passing empty values like so:

$DB.backup($BackupFile,$False,$False,$null,$True)

With SQLPS as provided with SQL Server 2012, there is another way. The CmdLet Backup-ASDatabase was introduced, which works in much the same way as Backup-SQLDatabase:

Backup-ASDatabase -BackupFile $BackupFile -Server $SSASInstance -Name $DBName -ApplyCompression

This is slightly easier to work with, and can be automated for all the dbs in an instance just like we did for the Data Platform examples:

Import-Module SQLPS -DisableNameChecking

$SSASInstance = "SSASServer1\Instance1"
$SSASServer = New-Object Microsoft.AnalysisServices.Server
$SSASServer.Connect($SSASInstance)

foreach($DB in $SSASServer.databases){
    $BackupFile = $DB.Name+"_"+[DateTime]::Now.ToString("yyyyMMdd_HHmmss")+".abf"
    Backup-ASDatabase -BackupFile $BackupFile -Server $SSASInstance -Name $DB.Name -ApplyCompression
}

And as a bonus, this format is backwards compatible with SQL Server 2008 and SQL Server 2008 R2; you only need the SQL Server 2012 SQLPS module installed, which comes with the SQL Server client tools (free to download and use if you grab SQL Server 2012 Express SP1 or better). So it could make your life easier to grab this even if you upgrade nothing else to 2012.

Note that all the usual Analysis Services restrictions on what is actually backed up apply when backing up via PowerShell, just as they would via any other method.

One advantage of backing up SSAS via PowerShell is that it’s very easy to ensure that the underlying datastores are also backed up at the same time.

Tomorrow we’ll look at restoring SSAS Databases.



Day 24 of 31 Days of SQL Server Backup and Restore using PowerShell: Error Checking

So far most of our scripts have been a little bereft of error checking. This has mainly been to make them easier to read. If you’ve been experimenting with the scripts, then it’s almost certain you’ll have come across the following unhelpful error message from PowerShell:

Exception calling "SqlBackup" with "1" argument(s): "Backup failed for Server 'Server1'. "
At line:1 char:1
+ $Backup.SqlBackup($SQLSvr)
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : FailedOperationException

You’ll probably also have discovered that amongst many things it can be a response to:

  • Wrong Server name
  • Permissions error on backup destination
  • Contradictory parameters
  • DB exists and you’ve not specified replace
  • You’ve specified a filename that’s already in use
  • Many, many, more

This means that for scripts you want to rely on for automation, you need to make sure you’ve got robust checking in place, or at least a good logging system so you can go back and review the settings for each run. Common issues, and how to check for them, are listed below.

Missing folders, or files that already exist: PowerShell provides Test-Path, which does what it says on the tin. It tests a path to see if it exists, returning $True or $False as appropriate, so you can quickly check a path before writing to it. To make sure that your backup folder exists before writing to it:

if ((Test-Path "c:\mybackups\") -eq $False){
    write-output "Missing folder"
}

This could be modified to create the missing path if required, or if you actually wanted the file not to be there you can redirect your process.
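A minimal sketch of the create-if-missing variant (the folder name is just an example):

```powershell
$BackupFolder = "c:\mybackups\"

# Create the backup folder if it isn't already there
if (-not (Test-Path $BackupFolder)){
    New-Item -Path $BackupFolder -ItemType Directory | Out-Null
}
```
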

Getting the SQL Server name wrong or the server not being available. When you build your new SQL Server object like so:

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("WrongServer")

you might think that PowerShell would warn you if the server didn’t exist, but it doesn’t. And you might not find out until you get the error message above. To check it as soon as you create it, you need to try to use it:

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("WrongServer")
try{
    $sqlsvr.databases | Out-Null
}
catch{
    Write-Host "SQL Server Not Available"
}

You’ll also need to check that the database you’re targeting for backup or restore exists, though for different reasons. If you’re restoring and the database exists, you need to set the Restore object’s Replace property to $True:

$DBname = "fdg"
$DBexists = $True

try{
    $sqlsvr.Databases.Item($DBname) | Get-Member -ErrorAction Stop
}
catch [System.Exception]{
    <#
    DB does not exist.
    Choose to abort the backup attempt.
    Set a flag that can be checked in the Finally block.
    #>
    $DBexists = $False
}
finally{
    if ($DBexists -ne $False){
        <#
        DB exists, choose whether to abort the restore
        or set .Replace = $True
        #>
    }
}

A common problem when backing up or restoring is that the SQL Server service account doesn’t have the correct permissions to access the area. Remember that PowerShell is just submitting T-SQL for SQL Server to execute, so it’s the service account’s permissions that matter; the account running the script may have access to a path that the service account does not.

Checking the permissions takes a bit more code but is also possible. This assumes we have a validated SQL Server connection called $sqlsvr, and also that the account running the script has permissions to see the permissions on the requested object:

$serviceaccount = $sqlsvr.ServiceAccount

#Need to append \\ to the end of the domain so it works in -replace
$domain= $env:USERDOMAIN+"\\"

#Path to Folder we want to check
$path = "c:\folder1\folder2"

#Get Group memberships for Service Account (needs the ActiveDirectory module)
Import-Module ActiveDirectory
$groups = Get-ADPrincipalGroupMembership ($ServiceAccount -ireplace $domain,"")

#Build up an Array to hold all the group and account names
$accounts = @()
$accounts += $groups.samaccountname
$accounts += $ServiceAccount -ireplace $domain,""

#Set a check for whether we have write permissions
$write=0
$read=0

#Get the ACL on our specified path
$perms = get-acl $path

#For each ACL in array
foreach ($acl in $perms.access){
    #Then for each acccount/group
    foreach ($acc in $accounts){
        #If a right is assigned to a group or account (qualified with our domain)
        if ($env:USERDOMAIN+"\"+$acc -ieq $acl.IdentityReference){
            #Check if it's a permission that gives us write access
            if (($acl.FileSystemRights -ilike "*FullControl*") -or ($acl.FileSystemRights -ilike "*Modify*")){
                if($acl.AccessControlType -ne "Allow"){
                    #Hit a Deny, overrides any other permission, so drop out of checking
                    $write = 0
                    break
                }
                #Increment our write check. Keep looping as we may not have hit a deny yet
                $write++

                #If we have write we have read
                $read++

            }elseif ($acl.FileSystemRights -ilike "*Read*"){
                $read++
            }
        }
    }
}

if ($write -eq 0){
    write-output "SQL Server service account $serviceaccount doesn't have write access to $path"
    #Take action
}

if ($read -eq 0){
    write-output "SQL Server service account $serviceaccount doesn't have read access to $path"
    #Take action
}

If you’re accessing your path through a share, then you also need to check the share permissions, as they may be more restrictive than your filesystem permissions (same assumptions as the filesystem example above):

$serviceaccount = $sqlsvr.ServiceAccount

#Need to append \\ to the end of the domain so it works in -replace
$domain= $env:USERDOMAIN+"\\"

#Path to Folder we want to check
$path="\\Server1\Share3$\folder1"
#Split the path on \ to get the bits we want
$ShareArray = $path -Split [regex]::Escape('\')

#Get Group memberships for Service Account
$groups = Get-ADPrincipalGroupMembership ($ServiceAccount -ireplace $domain,"")

#Build up an Array to hold all the group and account names
$accounts = @()
$accounts += $groups.samaccountname
$accounts += $ServiceAccount -ireplace $domain,""

#Set a check for whether we have write permissions
$write=0
$read=0

$share = $ShareArray[3].Trim()
$ShareSecurity = Get-WMIObject -Class Win32_LogicalShareSecuritySetting -Filter "name='$Share'" -ComputerName $ShareArray[2]
$ShareSecurityDesc = $ShareSecurity.GetSecurityDescriptor().Descriptor
foreach($ace in $ShareSecurityDesc.DACL){
    foreach ($acc in $accounts){
        $acl = New-Object Security.AccessControl.FileSystemAccessRule($ace.Trustee.Name, $ace.AccessMask, $ace.AceType)
        #If a right is assigned to a group or account
        if ($acc -ieq $acl.IdentityReference){
             #Check if it's a permission that gives us write access
            if (($acl.FileSystemRights -ilike "*FullControl*") -or ($acl.FileSystemRights -ilike "*Modify*")){
                if($acl.AccessControlType -ne "Allow"){
                    #Hit a Deny, overrides any other permission, so drop out of checking
                    $write = 0
                    break
                }
                #Increment our write check. Keep looping as we may not have hit a deny yet
                $write++
                 #If we have write we have read
                $read++

            }elseif ($acl.FileSystemRights -ilike "*Read*"){
                $read++
            }
        }
    }

}

Those examples cover a number of the most common errors I see when leaving backups and restores to run on their own. You can add more to cover cases as you discover them.

And at a minimum, ensure that everything is encased in try/catch/finally blocks (making sure to add -ErrorAction Stop to non-terminating cmdlets). The next most useful thing is to use the Script() method to keep a record of what you’re asking SQL Server to do; a lot of the time you’ll look at it and realise what you’ve done wrong.
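As a minimal sketch of that pattern, with the log path purely as an example and assuming $Backup and $SQLSvr have been built up as in earlier posts:

```powershell
try{
    # Keep a record of the T-SQL we're about to ask SQL Server to run
    $Backup.Script($SQLSvr) | Out-File "c:\logs\backup_script.sql" -Append
    $Backup.SqlBackup($SQLSvr)
}
catch{
    # The inner exception usually carries the useful SQL Server detail
    Write-Output "Backup failed: $($_.Exception.InnerException.Message)"
}
```
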



Day 23 of 31 Days of SQL Server Backup and Restore using PowerShell: It’s all just T-SQL, really

Yesterday we compared the performance of running backups with T-SQL and with PowerShell, and the conclusion was that on small backups there’s a small overhead from building up the backup objects and the T-SQL behind the scenes.

At the end of the day, we shouldn’t have been expecting much of a difference in performance. All PowerShell’s SQLPS module is doing is using SQL Management Objects (SMO) to create T-SQL, which is then submitted to SQL Server. So most of the small performance overhead is purely the work of creating that T-SQL and then connecting to our SQL Server instance to submit it for execution.

We can exploit this in two ways:

  • Rather than executing the T-SQL, we can just get PowerShell to give us the scripts so we can run them ourselves. This might be something you want to do in the case of an important restore, where you want to confirm the actions before committing.
  • Fault finding. SQLPS doesn’t offer the most informative error messages (in fact it quite often returns the same one for any error), so by extracting the T-SQL you can quickly see if the logic was correct, or if you’ve forgotten to create part of your backup or restore.

Getting the T-SQL script is very simple; it’s just a substitution at the point of restore or backup:

  • Switch $Backup.SqlBackup for $Backup.Script
  • Switch $Restore.SqlRestore for $Restore.Script
  • Add the -Script switch to the Restore-SQLDatabase or Backup-SQLDatabase cmdlets

With a simple example, it’s easy to see what will come out:

Import-Module "SQLPS" -DisableNameChecking

$ServerName = "Server1\SQL2012"
$SQLSvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($ServerName)

$Db = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Database
$Db = $SQLSvr.Databases.Item("psdb1")

$Backup = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Backup
$Backup.Action = [Microsoft.SQLServer.Management.SMO.BackupActionType]::Database
$backup.BackupSetDescription = "Full Backup of "+$Db.Name
$Backup.Database = $db.Name

$BackupName = "c:\psbackups\"+$Db.Name+"_"+[DateTime]::Now.ToString("yyyyMMdd_HHmmss")+".bak"
$DeviceType = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
$BackupDevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($BackupName,$DeviceType)

$Backup.Devices.Add($BackupDevice)
$Backup.Script($SQLSvr)
$Backup.Devices.Remove($BackupDevice)

will produce:

BACKUP DATABASE [psdb1] TO  DISK = N'c:\psbackups\psdb1_20130924_181155.bak' WITH  DESCRIPTION = N'Full Backup of psdb1', NOFORMAT, NOINIT, NOSKIP, REWIND, NOUNLOAD,  STATS = 10

But if we go for a more complex restore it provides a lot more. As an example let’s modify our Restore-SQLDatabasePointInTime function from Day 20

function Restore-SQLDatabasePointInTimeScript
{param($BackupFiles, [string]$SQLServer, [datetime]$PointInTime=(Get-Date).AddDays(1))

    $ScriptOutput = @()
    # Connect to the server we'll be scripting the restore against
    $RestoreServer = New-Object -TypeName Microsoft.SQLServer.Management.Smo.Server($SQLServer)
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restore.ToPointInTime = get-date($PointInTime) -format "MMM dd, yyyy hh:mm tt"

    $devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
    foreach ($backup in $BackupFiles){
        $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backup.filepath,$devicetype)

        $restore.Devices.Add($restoredevice) | Out-Null
        $restore.FileNumber = $backup.position
        $restore.Database = $backup.DatabaseName
        $ScriptOutput += $restore.Script($RestoreServer)

        if ($backup.LastLSN -eq $BackupFiles[-1].LastLSN){
            $ScriptOutput += "Restore database "+$restore.Database.ToString()+" with recovery"
        }
        $restore.Devices.Remove($restoredevice) | Out-Null
    }
    return $ScriptOutput
}

And then call the same sequence of functions for our point in time restore operation, except substituting our new function at the very end. Then we can see that we get a very handy amount of code built for us:

RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_3_18.bak' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_.bcf' WITH  FILE = 4,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_5_19.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_5_20.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_5_21.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_5_22.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_5_23.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_0.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_1.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_2.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_3.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_4.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_5.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_6.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_7.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_8.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_9.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_10.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
Restore database PointInTime with recovery

It is in fact fine to do the restore and retrieve the script during the same operation, so we could modify our original function to include the .Script() method as well, or make it optional via a switch or variable. Collecting the scripts for every restore/backup and storing them would mean you’d have a head start when trying to debug issues, or more information to provide for auditors.

As the PowerShell submissions are ultimately just T-SQL you can use all your normal tools to monitor or control them.

Tomorrow we’ll look at error checking.



Day 22 of 31 Days of SQL Server Backup and Restore using PowerShell: Performance comparisons

So far we’ve been looking at the cool stuff we can do with SQL Server backups, restores and PowerShell. And I don’t know about you, but it looks like it’s going to make life a lot easier. But is there a downside?

Well, as we’re DBAs, the first thing we usually worry about is performance. So let’s run some raw speed tests to see if there’s any unwanted overhead. For a backup, most of the overhead is going to be the speed of the underlying drives, so all these tests are run on the same rig: SQL Server 2012 running on Windows 2012 in a VirtualBox virtual machine, with PowerShell 3.0.

We need a sizeable DB so we can get some decent timings. Nothing fancy here:

USE [master]
GO
CREATE DATABASE [SpeedTest]
 CONTAINMENT = NONE
 ON  PRIMARY
( NAME = N'SpeedTest', FILENAME = N'E:\speedtest\SpeedTest.mdf')
 LOG ON
( NAME = N'SpeedTest_log', FILENAME = N'E:\speedtest\SpeedTest_log.ldf')
GO
USE [SpeedTest]
GO
CREATE TABLE [dbo].[bigtable1](
	[dummyID] [int] IDENTITY(1,1) NOT NULL,
	[bigdatat] [varchar](8000) NULL
) ON [PRIMARY]

GO
SET ANSI_PADDING OFF
GO
/****** Object:  Table [dbo].[bigtable2]    Script Date: 24/09/2013 15:05:49 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[bigtable2](
	[dummyID] [int] IDENTITY(1,1) NOT NULL,
	[bigdatat] [varchar](8000) NULL
) ON [PRIMARY]

GO

CREATE TABLE [dbo].[bigtable3](
	[dummyID] [int] IDENTITY(1,1) NOT NULL,
	[bigdatat] [varchar](8000) NULL
) ON [PRIMARY]

GO

CREATE TABLE [dbo].[bigtable4](
	[dummyID] [int] IDENTITY(1,1) NOT NULL,
	[bigdatat] [varchar](8000) NULL
) ON [PRIMARY]

GO

And then bloat the tables with junk:

insert into Bigtable1 (bigdatat) values (replicate('junk',2000))
insert into Bigtable2 (bigdatat) values (replicate('junk',2000))
insert into Bigtable3 (bigdatat) values (replicate('junk',2000))
insert into Bigtable4 (bigdatat) values (replicate('junk',2000))
go 100000

Which gives us a database of 3GB. Tiny in the real world sense, but large enough to give us some usable stats.

To get the T-SQL timings we use the following script:

create table backuptime(
ident integer,
starttime datetime,
endtime datetime
)

declare @i integer  =1;

while @i<20
begin
    declare @starttime datetime, @endtime datetime;
    set @starttime = CURRENT_TIMESTAMP
    backup database Speedtest to disk='e:\sizetest\backup.bak' with init
    select @endtime = current_timestamp
    insert into backuptime values (@i,@starttime,@endtime)
    set @i = @i+1
end

We create a holding table, and then loop through 20 backups recording the start and finish times in our table.

And to get our PowerShell timings:

Import-Module SQLPS -DisableNameChecking

$Database = "SpeedTest"
$ServerName = "WIN-C0BP65U3D4G"

$SQLSvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($ServerName)
$Db = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Database
$Db = $SQLSvr.Databases.Item($Database)
$BackupName = "e:\sizetest\speedtestps.bak"
$DeviceType = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

$i=1

$times = @()

while ($i -lt 20){
    $StartTime = get-date
    $Backup = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Backup
    $Backup.Initialize = $True

    $Backup.Action = [Microsoft.SQLServer.Management.SMO.BackupActionType]::Database
    $Backup.Database = $db.Name

    $BackupDevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($BackupName,$DeviceType)

    $Backup.Devices.Add($BackupDevice)
    $Backup.SqlBackup($SQLSvr)
    $Backup.Devices.Remove($BackupDevice)

    remove-variable backup
    $times += new-timespan $StartTime $(get-date)
    $i++
}

We create the connection to SQL Server outside of our loop, as this would be the most costly part of the script but is only really needed once per connection to the server. The backup runs the same full backup as our T-SQL version. We use the New-TimeSpan cmdlet to calculate the interval between start and finish, then store that in an array which we can later analyse with Measure-Object to get maximum, minimum and average.
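Analysing the array afterwards is a one-liner, since each TimeSpan exposes a TotalSeconds property for Measure-Object to work on:

```powershell
# Summarise the collected run times
$times | Measure-Object -Property TotalSeconds -Average -Maximum -Minimum
```
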

So, after running each procedure three times to get a consistent average, what did we find?

From the T-SQL runs I got an average of 48.18s, with a longest of 55.59s, a shortest of 45.94s and a standard deviation of 1.42s (full result data below).

And from the PowerShell runs I got an average of 58.56s, with a longest of 72.91s, a shortest of 56.48s and a standard deviation of 2.1s (full result data below).

So PowerShell appears slower here. It’s certainly more variable: there’s a greater spread between max and min, and the larger standard deviation means the times are more spread out in general, even ignoring the extremes.

Not looking great at the moment. So let’s try doubling the size of the database and see if the differences stay consistent. Just rerun the insert block again. This time I’m only going to run the timings once each.

Now the T-SQL runs give me an average of 111.8s, a max of 118.59s, a minimum of 107.87s and a standard deviation of 2.16s (full result data below).

And PowerShell gives me an average of 110.48s, a max of 121.1s, a minimum of 107s and a standard deviation of 2.78s (full result data below).

So now they’re pretty much even, which suggests the PowerShell overhead is a fixed cost of building the objects and submitting the T-SQL to the server, rather than something that scales with the size of the backup.

Hopefully that’s put some minds at rest about any potential performance issues with using these PowerShell backup and restore methods over the more conventional SQL Server methods.


T-SQL Backups time results

Start time | End time | Time difference (ms)
24/09/2013 15:24:23 | 24/09/2013 15:25:19 | 55593
24/09/2013 15:25:19 | 24/09/2013 15:26:07 | 47823
24/09/2013 15:26:07 | 24/09/2013 15:26:54 | 47363
24/09/2013 15:26:54 | 24/09/2013 15:27:42 | 48213
24/09/2013 15:27:42 | 24/09/2013 15:28:30 | 47576
24/09/2013 15:28:30 | 24/09/2013 15:29:17 | 46646
24/09/2013 15:29:17 | 24/09/2013 15:30:03 | 46326
24/09/2013 15:30:03 | 24/09/2013 15:30:51 | 48476
24/09/2013 15:30:51 | 24/09/2013 15:31:39 | 47656
24/09/2013 15:31:39 | 24/09/2013 15:32:26 | 46546
24/09/2013 15:32:26 | 24/09/2013 15:33:13 | 47756
24/09/2013 15:33:13 | 24/09/2013 15:34:01 | 47716
24/09/2013 15:34:01 | 24/09/2013 15:34:47 | 45940
24/09/2013 15:34:47 | 24/09/2013 15:35:35 | 47993
24/09/2013 15:35:35 | 24/09/2013 15:36:24 | 49250
24/09/2013 15:36:24 | 24/09/2013 15:37:14 | 50020
24/09/2013 15:37:14 | 24/09/2013 15:38:03 | 48333
24/09/2013 15:38:03 | 24/09/2013 15:38:51 | 48286
24/09/2013 15:38:51 | 24/09/2013 15:39:39 | 48086
24/09/2013 16:32:19 | 24/09/2013 16:33:06 | 47543
24/09/2013 16:33:06 | 24/09/2013 16:33:54 | 48000
24/09/2013 16:33:54 | 24/09/2013 16:34:41 | 47250
24/09/2013 16:34:41 | 24/09/2013 16:35:30 | 48350
24/09/2013 16:35:30 | 24/09/2013 16:36:17 | 47543
24/09/2013 16:36:17 | 24/09/2013 16:37:05 | 47426
24/09/2013 16:37:05 | 24/09/2013 16:37:52 | 47296
24/09/2013 16:37:52 | 24/09/2013 16:38:40 | 48030
24/09/2013 16:38:40 | 24/09/2013 16:39:29 | 49056
24/09/2013 16:39:29 | 24/09/2013 16:40:15 | 46010
24/09/2013 16:40:15 | 24/09/2013 16:41:02 | 46876
24/09/2013 16:41:02 | 24/09/2013 16:41:49 | 47360
24/09/2013 16:41:49 | 24/09/2013 16:42:37 | 47286
24/09/2013 16:42:37 | 24/09/2013 16:43:26 | 49096
24/09/2013 16:43:26 | 24/09/2013 16:44:13 | 47810
24/09/2013 16:44:13 | 24/09/2013 16:45:01 | 47183
24/09/2013 16:45:01 | 24/09/2013 16:45:48 | 47423
24/09/2013 16:45:48 | 24/09/2013 16:46:36 | 47953
24/09/2013 16:46:36 | 24/09/2013 16:47:24 | 48056
25/09/2013 07:32:54 | 25/09/2013 07:33:42 | 47663
25/09/2013 07:33:42 | 25/09/2013 07:34:30 | 47906
25/09/2013 07:34:30 | 25/09/2013 07:35:19 | 49300
25/09/2013 07:35:19 | 25/09/2013 07:36:08 | 48623
25/09/2013 07:36:08 | 25/09/2013 07:36:55 | 47323
25/09/2013 07:36:55 | 25/09/2013 07:37:44 | 49240
25/09/2013 07:37:44 | 25/09/2013 07:38:34 | 49756
25/09/2013 07:38:34 | 25/09/2013 07:39:23 | 48940
25/09/2013 07:39:23 | 25/09/2013 07:40:12 | 48746
25/09/2013 07:40:12 | 25/09/2013 07:40:59 | 47420
25/09/2013 07:40:59 | 25/09/2013 07:41:47 | 48020
25/09/2013 07:41:47 | 25/09/2013 07:42:35 | 47773
25/09/2013 07:42:35 | 25/09/2013 07:43:25 | 50163
25/09/2013 07:43:25 | 25/09/2013 07:44:16 | 51130
25/09/2013 07:44:16 | 25/09/2013 07:45:05 | 48266
25/09/2013 07:45:05 | 25/09/2013 07:45:53 | 48846
25/09/2013 07:45:53 | 25/09/2013 07:46:42 | 49130
25/09/2013 07:46:42 | 25/09/2013 07:47:32 | 49333
25/09/2013 07:47:32 | 25/09/2013 07:48:19 | 47490

PowerShell Backup Time results

Time difference (s)
57.30
56.48
59.92
57.85
57.72
57.99
58.43
57.15
57.75
57.57
58.08
57.57
57.92
58.24
58.85
59.68
57.91
58.89
56.84
58.23
59.27
58.78
58.26
58.18
57.62
59.20
58.96
57.80
57.94
57.56
58.80
57.45
56.66
59.02
59.24
58.72
57.96
58.11
72.91
59.89
58.91
58.84
58.52
59.15
59.22
58.59
57.33
58.93
58.23
58.58
58.37
58.10
60.17
58.09
58.15
56.82
59.31

T-SQL Results, 6GB database

StartTime                EndTime                  Difference(ms)
2013-09-25 09:42:18.563  2013-09-25 09:44:17.153  118590
2013-09-25 09:44:17.160  2013-09-25 09:46:08.480  111320
2013-09-25 09:46:08.480  2013-09-25 09:47:58.923  110443
2013-09-25 09:47:58.923  2013-09-25 09:49:46.793  107870
2013-09-25 09:49:46.793  2013-09-25 09:51:38.323  111530
2013-09-25 09:51:38.323  2013-09-25 09:53:28.560  110236
2013-09-25 09:53:28.560  2013-09-25 09:55:21.393  112833
2013-09-25 09:55:21.393  2013-09-25 09:57:11.690  110296
2013-09-25 09:57:11.690  2013-09-25 09:59:03.420  111730
2013-09-25 09:59:03.420  2013-09-25 10:00:56.600  113180
2013-09-25 10:00:56.600  2013-09-25 10:02:48.073  111473
2013-09-25 10:02:48.073  2013-09-25 10:04:38.913  110840
2013-09-25 10:04:38.913  2013-09-25 10:06:30.870  111956
2013-09-25 10:06:30.870  2013-09-25 10:08:23.490  112620
2013-09-25 10:08:23.490  2013-09-25 10:10:16.580  113090
2013-09-25 10:10:16.580  2013-09-25 10:12:08.380  111800
2013-09-25 10:12:08.380  2013-09-25 10:14:02.007  113626
2013-09-25 10:14:02.007  2013-09-25 10:15:51.490  109483
2013-09-25 10:15:51.490  2013-09-25 10:17:42.697  111206

PowerShell results 6GB database

Time (s)
121.08
111.15
110.59
110.47
109.39
109.23
110.79
109.87
110.14
109.96
110.59
107.47
109.18
110.67
109.81
110.96
110.46
110.38
106.99
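With the raw timings in hand, a quick bit of PowerShell will summarise a run. A minimal sketch, using a small illustrative sample rather than the full result set above:

```powershell
# Summarise a set of restore timings (seconds) - values here are a small illustrative sample
$timings = 121.08, 111.15, 110.59, 110.47, 109.39
$stats = $timings | Measure-Object -Average -Minimum -Maximum
"Avg: {0:N2}s  Min: {1:N2}s  Max: {2:N2}s" -f $stats.Average, $stats.Minimum, $stats.Maximum
```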

Day 21 of 31 Days of SQL Server Backup and Restore using PowerShell: Verifying a restored database

So far we’ve just restored our databases, and assumed that the lack of an error from PowerShell means that all is good. Well, it can be, but it’s not guaranteed.

Paul S. Randal provides a number of corrupted databases which will restore absolutely fine – http://www.sqlskills.com/blogs/paul/corruption-demo-databases-and-scripts/ – so relying on a lack of errors doesn’t mean your database is ready for use. If you want a corrupted database to run through these examples with, and are lucky enough not to have one to hand, these are ready made for you.

Also, a restored copy of your production database is the perfect place to run heavy DBCC checks. Getting a window to run them on production may be a nightmare, but you won’t have to negotiate for one if it’s a restored copy that no one else is using.

Today we’re just going to look at using SQL Server’s built in Database Consistency Check (DBCC), but there is nothing to stop you from rolling your own validation scripts which are aware of your business requirements as well as SQL Server’s internal needs.

To do this we’ll be using Invoke-SQLCmd, which lets us run SQL queries. Invoke-SQLCmd returns a PowerShell DataTable, so we can capture quite complex results for later processing or saving.

At its simplest we can run DBCC against the database of our choice:

Import-Module sqlps -DisableNameChecking
$results = Invoke-SQLCmd -ServerInstance "Server1" -Query "DBCC CHECKDB (database1) WITH TABLERESULTS"

Note that we use WITH TABLERESULTS; this ensures the output from DBCC goes smoothly into our $results table. As it stands this will return 80 rows whether the database has issues or not, as DBCC returns all of the informational messages as well.

If you want to keep the informational messages but don’t want to act upon them, you can filter them by averaging the Status column:

if (($results | Measure-Object -property status -Average).Average -eq 0){
    # We have only information messages, do nothing
}else{
    #We have error records, do something.
}

A status of 0 indicates no problem, so an average of 0 means that every value is 0.
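You can convince yourself of how the averaging works with a quick mock-up that needs no SQL Server connection at all (the Status values below are invented for illustration):

```powershell
# Mock DBCC rows: Status 0 = informational, non-zero = a problem was found
$clean = @([pscustomobject]@{Status=0}, [pscustomobject]@{Status=0})
$dirty = @([pscustomobject]@{Status=0}, [pscustomobject]@{Status=23})

($clean | Measure-Object -Property Status -Average).Average   # 0 - nothing to do
($dirty | Measure-Object -Property Status -Average).Average   # greater than 0 - investigate
```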

You can also suppress the informational messages at the DBCC level by appending NO_INFOMSGS to your query:

Import-Module sqlps -DisableNameChecking
$results = Invoke-SQLCmd -ServerInstance "Server1" -Query "DBCC CHECKDB (database1) WITH TABLERESULTS, NO_INFOMSGS"

Which means you’ll only get back rows for errors. In that case, you can simply check the length of your results:

if ($results.length -eq 0){
    # We have only information messages, do nothing
}else{
    #We have error records, do something.
}

There is also a third possible outcome: the database can be so corrupted that DBCC does not complete, or in some cases even start! Because of this, we need to wrap the Invoke-SQLCmd call in a Try/Catch block so we catch this as well:

try{
    $tmp2 = Invoke-Sqlcmd -Query "DBCC CHECKDB (fatal) with TABLERESULTS, NO_INFOMSGS" -ServerInstance "Server1" -ErrorAction Stop
}
catch [System.Exception]{
    $errorout += "major error, big problem"
    $errorout += $error[0]
}

Note that we need to add -ErrorAction Stop to our call to Invoke-SQLCmd. This is because, in PowerShell terminology, Invoke-SQLCmd raises non-terminating errors, which won’t stop execution or be caught by Try/Catch. Adding -ErrorAction Stop tells PowerShell that we want any errors raised to be terminating, which means we can catch them. It is also possible to set this at script level, but that can cause some unexpected errors to bubble up, so only do so if you know what to expect.
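As a sketch of the script-level alternative: $ErrorActionPreference is PowerShell’s built-in preference variable for this, but remember it turns every error in scope into a terminating one, not just those from Invoke-SQLCmd:

```powershell
# Script level: all non-terminating errors in this scope now terminate and are catchable.
# Use with care - errors from unrelated cmdlets will also start hitting your catch blocks.
$ErrorActionPreference = "Stop"
try{
    Get-Item "c:\a\path\that\does\not\exist.txt"   # would normally be non-terminating
}
catch{
    Write-Output "Caught: $_"
}
```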

There are a number of ways you can handle the errors. At the simplest level you can notify yourself or other operators that an error has been found, and then investigate manually. PowerShell provides a cmdlet Send-MailMessage which you can use to pass the message on:

Push-Location
import-module sqlps -DisableNameChecking
Pop-Location

$results = @()
$Database = "fatal"
$SQLServer = "Server1"
$errorout = ""

try{
    $results = Invoke-Sqlcmd -Query "DBCC CHECKDB ($Database) with TABLERESULTS, NO_INFOMSGS" -ServerInstance $SQLServer -ErrorAction Stop
}
catch [System.Exception]{
    $errorout = $_
}
if (($results.length -eq 0) -and ($errorout -ne "")){
    Send-MailMessage -SmtpServer smtphost.contoso.com -from sql@ncontoso.com -to stuart.moore@contoso.com  -body $errorout -Subject "Fatal DBCC error for $Database"
}else{
   $results | export-clixml c:\temp\dbcc.xml
   Send-MailMessage -SmtpServer smtphost.contoso.com -from sql@ncontoso.com -to stuart.moore@contoso.com -body "See Attachment for output" -Subject "DBCC errors for $Database" -Attachments c:\temp\dbcc.xml
   Remove-Item c:\temp\dbcc.xml
}

In this case I’m dropping the output into XML before sending it as an attachment, but PowerShell provides cmdlets for converting to text, CSV and other formats as well.

The results can also be written back to a database for audit and reporting purposes, and you can write the information from multiple restore servers back to a central repository. Unfortunately this has become slightly more complicated since the release of SQL Server 2012, as Microsoft changed the columns that various DBCC commands return. There are a number of options to cope with this:

  • Different tables for SQL Server 2012, and pre 2012 SQL Server results
  • Storing the data as XML in the database
  • Only storing certain information (ie; ignore the bits that don’t fit)
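As a rough sketch of the second option, you could serialise the whole result set to XML and store it in a single xml column; the destination table and column names here are hypothetical:

```powershell
# Hypothetical table: CREATE TABLE DBCCXmlResults (ServerName varchar(200), RunDate datetime, Results xml)
$xml = ($results | ConvertTo-Xml -As String -NoTypeInformation) -replace "'","''"
$query = "insert into DBCCXmlResults (ServerName, RunDate, Results) values ('$SQLServer', current_timestamp, '$xml')"
Invoke-Sqlcmd -Query $query -ServerInstance $ResultsServer -Database $ResultsDatabase
```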

Each of these presents benefits and drawbacks. Here’s a quick example of how to log the results to a table, based on having a table like this (this is the pre SQL Server 2012 version):

CREATE TABLE pre2012DBCCResults
(
[ServerName] VARCHAR(200) NULL,
[DatabaseName] VARCHAR(200) NULL,
[DateOfRun] DATETIME NULL,
[Error] INT NULL,
[Level] INT NULL,
[State] INT NULL,
[MessageText] VARCHAR(7000) NULL,
[RepairLevel] varchar(200) NULL,
[Status] INT NULL,
[DbId] INT NULL,
[ObjectId] INT NULL,
[IndexId] INT NULL,
[PartitionId] INT NULL,
[AllocUnitId] INT NULL,
[File] INT NULL,
[Page] INT NULL,
[Slot] INT NULL,
[RefFile] INT NULL,
[RefPage] INT NULL,
[RefSlot] INT NULL,
[Allocation] INT NULL
)

And then to populate that table we add the following to our Verification script:

Push-Location
import-module sqlps -DisableNameChecking
Pop-Location

$results=@()
$Database = "semifatal"
$SQLServer = "ntuclstr-qc\quality"
$errorout = ""

try{
    $results = Invoke-Sqlcmd -Query "DBCC CHECKDB ($Database) with TABLERESULTS, NO_INFOMSGS" -ServerInstance $SQLServer -ErrorAction Stop
}
catch [System.Exception]{
    $errorout = $_
}
if (($results.length -eq 0) -and ($errorout -ne "")){
    Send-MailMessage -SmtpServer smtphost.contoso.com -from sql@ncontoso.com -to stuart.moore@contoso.com  -body $errorout -Subject "Fatal DBCC error for $Database"
}else{
    $results | Add-Member -MemberType NoteProperty -name ServerName -value $SQLServer
    $results | Add-Member -MemberType NoteProperty -name DatabaseName -value $Database
    $results | Add-Member -MemberType NoteProperty -name DateOfRun -value (get-date) -TypeName datetime

    $ResultsServer = "Server2"
    $ResultsDatabase = "RestoreChecks"
    $ResultsTable = "pre2012DBCCResults"
    $RestoreSQL = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($ResultsServer)
    if ($RestoreSQL.VersionMajor -lt 11){ # SQL Server 2012 is version 11
        foreach ($row in $results){
        $row.MessageText = $row.MessageText -replace ("'","''")
        $str = "insert into "+$ResultsTable+" ([ServerName],[DatabaseName],[DateOfRun],[Error],[Level],[State],[MessageText],[RepairLevel],[Status],[DbId],[ObjectId],[IndexId],[PartitionId],[AllocUnitId],[File],[Page],[Slot],[RefFile],[RefPage],[RefSlot],[Allocation]) values ('"+$row.ServerName+"','"+$row.DatabaseName+"','"+$row.DateOfRun+"','"+$row.Error+"','"+$row.Level+"','"+$row.State+"','"+$row.MessageText+"','"+$row.RepairLevel+"','"+$row.Status+"','"+$row.DbId+"','"+$row.Id+"','"+$row.IndId+"','"+$row.PartitionId+"','"+$row.AllocUnitId+"','"+$row.File+"','"+$row.Page+"','"+$row.Slot+"','"+$row.RefFile+"','"+$row.RefPage+"','"+$row.RefSlot+"','"+$row.Allocation+"')"
        Invoke-Sqlcmd -query $str -ServerInstance $ResultsServer -Database $ResultsDatabase
        }
        Send-MailMessage -SmtpServer smtphost.contoso.com -from sql@ncontoso.com -to stuart.moore@contoso.com -body "Error records in table $ResultsTable in $ResultsDatabase on $ResultsServer" -Subject "DBCC errors for $Database"
    }else{
       #Work with the SQL Server 2012 code
    }

}

Again, in this example we’re only storing the results if there’s an error. But it would be simple enough to log all results if you wanted to by removing NO_INFOMSGS and moving the db write outside of the IF section.

If you wanted to run your own custom scripts, then you can just run them with Invoke-SQLCmd and handle the results however you want.

Tomorrow we’ll take a look at the performance differences between running your backups and restores via PowerShell or via T-SQL.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.


Day 20 of 31 Days of SQL Server Backup and Restore using PowerShell: Restore to Random Point in Time

Yesterday we looked at restoring our SQL Server database to a point in time, but we had to tell PowerShell and SQL Server when that point should be. As the overall idea of this series is to show you how you can automate your restores, having to manually enter a point in time isn’t really going to cut it. So today we’ll look at how you can pick a point in time from the period covered by your backups. We’ll also look at cutting some of yesterday’s code into reusable functions (remember the PowerShell mantra “Write functions, not scripts”) that we can add to our existing functions.

First we’ll create a function to return the subset of backup files we want. To save duplication, we’ll make it work both for point in time restores and for returning the files needed to restore to the most recent state. We’ll pass in our collection of backup files, a SQL Server name (the server the backup was performed against), a database name, and an optional point in time.

function Get-BackupsByDate
{param($BackupFiles, [string]$SQLServer, [string]$DatabaseName, [datetime]$PointInTime=(Get-Date).AddDays(1))

    $filter = get-date($PointInTime) -format "dd MMMM yyyy HH:mm:ss"
    $RestoreFiles = @()
    $RestoreFiles += $BackupFiles | Where-Object {($_.BackupTypeDescription -eq "Database") -and ($_.BackupStartDate -lt $filter) -and ($_.DatabaseName -eq $DatabaseName) -and ($_.ServerName -eq $SQLServer)} | Sort-Object LastLSN -Descending | Select-Object -First 1
    $RestoreFiles += $BackupFiles | Where-Object {($_.BackupTypeDescription -eq "Database Differential") -and ($_.BackupStartDate -lt $filter) -and ($_.DatabaseName -eq $DatabaseName) -and ($_.ServerName -eq $SQLServer)} | Sort-Object LastLSN -Descending | Select-Object -First 1
    $tmpLSN = $RestoreFiles | Measure-Object -Property LastLSN -Maximum
    $RestoreFiles += $BackupFiles | Where-Object {($_.LastLSN -ge $tmpLSN.Maximum) -and ($_.BackupTypeDescription -eq "Transaction Log") -and ($_.BackupStartDate -lt $filter) -and ($_.DatabaseName -eq $DatabaseName) -and ($_.ServerName -eq $SQLServer)}
    $RestoreFiles = $RestoreFiles | Sort-Object -Property LastLSN
    return $RestoreFiles
}

If we don’t get a point in time, we assume the user wants to restore to the most current state, so we set a time one day in the future and pick the files in the normal manner. We also sort the array within the function.

We can then also convert the restore portion of our script into a function that takes the array of files to restore, and a SQLServer to restore them on to:

function Restore-SQLDatabasePointInTime
{param($BackupFiles, [string]$SQLServer, [datetime]$PointInTime=(Get-Date).AddDays(1))

    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restore.ToPointInTime = get-date($PointInTime) -format "MMM dd, yyyy hh:mm tt"

    $devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
    foreach ($backup in $BackupFiles ){
        $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backup.filepath,$devicetype)

        $restore.devices.add($restoredevice)
        $restore.FileNumber = $backup.position
        $restore.Database = $backup.DatabaseName
        $restore.SqlRestore($SQLServer)

        if ($backup.LastLSN -eq $BackupFiles[-1].LastLSN){
            $query = "Restore database "+$restore.Database.tostring()+" with recovery"
            Invoke-Sqlcmd -ServerInstance $SQLServer -query $query
        }
        $restore.Devices.Remove($restoredevice)
    }

}

We use the same approach to cope with a missing $PointInTime parameter. This works fine as long as you pass in a filtered list of backups; the restore will just run to the END of the last transaction log rather than stopping at a point in time.

So now we can cut down yesterday’s script to:

$backupfiles = @()
$backups = @()
$restoreFiles = @()
$BackupServer = "Server1"
$RestoreServer = "RestoreServer1"
$DatabaseName = "PointInTime"

$backupfiles += Get-SQLBackupFiles -FilePath c:\piit\ -SQLServer $RestoreServer

foreach($file in $BackupFiles | Sort-Object -property LastLSN){
    $backups +=Get-BackupContents -FilePath $file -SQLServer $RestoreServer
}

$PointInTime = get-date("16:59 20/09/2013")

$RestoreFiles = Get-BackupsByDate -BackupFiles $Backups -SQLServer $BackupServer -DatabaseName $DatabaseName -PointInTime $PointInTime

Restore-SQLDatabasePointInTime -BackupFiles $RestoreFiles -SQLServer $RestoreServer -PointInTime $PointInTime

Which is a lot cleaner. But we still have a manually entered Point In time, so how do we get rid of that?

Well, we’ll add another function that filters the backups down to all the backups for a specific database and Servername (just in case you’ve got multiple copies of ‘Accounting’ on different servers). Then we can find the maximum and minimum dates covered by those backups, pick a random number in the middle and convert it to a suitable date:

function Get-DatabasePointInTime
{param($Backupfiles, [string]$SQLServer, [string]$DatabaseName)

    $BackupFiles = $Backupfiles | Where-Object {($_.DatabaseName -eq $DatabaseName) -and ($_.ServerName -eq $SQLServer)}
    $MinDate = ($Backupfiles | Measure-Object -Property BackupStartDate -Minimum).Minimum
    $MaxDate = ($Backupfiles | Measure-Object -Property BackupStartDate -Maximum).Maximum
    $seconds =  Get-Random ($MaxDate - $MinDate).totalseconds
    $PointInTime = $MinDate.AddSeconds($seconds)
    return $PointInTime
}

which reduces our script to:

$backupfiles = @()
$backups = @()
$restoreFiles = @()
$BackupServer = "Server1"
$RestoreServer = "RestoreServer1"
$DatabaseName = "PointInTime"

$backupfiles += Get-SQLBackupFiles -FilePath c:\piit\ -SQLServer $RestoreServer

foreach($file in $BackupFiles | Sort-Object -property LastLSN){
    $backups +=Get-BackupContents -FilePath $file -SQLServer $RestoreServer
}

$PointInTime = Get-DatabasePointInTime -BackupFiles $backups -SQLServer $RestoreServer -DatabaseName $DatabaseName

$RestoreFiles = Get-BackupsByDate -BackupFiles $Backups -SQLServer $BackupServer -DatabaseName $DatabaseName -PointInTime $PointInTime

Restore-SQLDatabasePointInTime -BackupFiles $RestoreFiles -SQLServer $RestoreServer -PointInTime $PointInTime

By changing the start of the script we can allow the script to take in parameters:


if ($args.Length -lt 4){
     $x = @"
Wrong usage. Should be: .\PITRestore.ps1 <RestoreServer> <BackupServer> <DatabaseName> <FilePath>
  - Name of the SQL Server doing the restore
  - Name of the SQL Server where the backup was performed
  - Database to restore
  - Path to backup files
"@
     write-host $x
     exit
}
$RestoreServer = $args[0]
$BackupServer = $args[1]
$DatabaseName = $args[2]
$FilePath = $args[3]

$backupfiles = @()
$backups = @()
$restoreFiles = @()

$backupfiles += Get-SQLBackupFiles -FilePath $filepath -SQLServer $RestoreServer

foreach($file in $BackupFiles | Sort-Object -property LastLSN){
    $backups +=Get-BackupContents -FilePath $file -SQLServer $RestoreServer
}

$PointInTime = Get-DatabasePointInTime -BackupFiles $backups -SQLServer $RestoreServer -DatabaseName $DatabaseName

$RestoreFiles = Get-BackupsByDate -BackupFiles $Backups -SQLServer $BackupServer -DatabaseName $DatabaseName -PointInTime $PointInTime

Restore-SQLDatabasePointInTime -BackupFiles $RestoreFiles -SQLServer $RestoreServer -PointInTime $PointInTime

And we can now call that as:

.\PITRestore.ps1 "RestoreServer1" "Server1" "PointInTime" "c:\piit\"

Which is simple to schedule from Task Scheduler (see Day 10).

So you can now easily schedule a restore of a database to a random point in time. This leads on to the next question: how can we verify that the restored database is sound? We’ll look at how we can automate verification of the new database with PowerShell tomorrow.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.


Day 19 of 31 Days of SQL Server Backup and Restore using PowerShell: Point In Time Restores

Over the last couple of posts we’ve covered increasingly complex restore scenarios, but we’ve always restored to the most recent point we can. Today we’re going to investigate restoring to a point somewhere else in the backup chain.

To do this, we may have to find:

  • The most recent Full database backup taken prior to the point we want to restore to
  • The most recent differential backup prior to the restore point
  • All of the transaction log backups from the differential up to and including the one covering the point we want to restore to

We may not need all of those. It may be that our Point in Time (PiT) requirement is covered purely by our full backup, or we may not need the differential because we can reach our PiT with just the full backup and transaction logs.

To give ourselves enough data to work with, we’ll create a simple database:

create database PointInTime;
go

use PointInTime;
go

create table step
(
StepID int identity(1,1),
StepTime datetime
)
go

And then we’ll simulate 7 “days” worth of database usage:

import-module "SQLPS" -DisableNameChecking

$Server = "WIN-C0BP65U3D4G"

$minutes = 0
$hours = 0
$days = 0

Backup-SqlDatabase -ServerInstance $Server -Database PointInTime -BackupAction Database -BackupFile "c:\piit\chain-start.bak"

While ($days -lt 8){
    $hours = 0
    while ($hours -lt 24){
        $minutes = 0
        while ($minutes -lt 60){
            Invoke-Sqlcmd -ServerInstance $Server -query "insert into PointInTime.dbo.Step (steptime) values (current_timestamp)"
            Start-Sleep -Milliseconds 500
            $minutes++
        }
        if ($hours -eq 18){
            if ($days%3 -eq 0){
                #full backup
                $backupfile = "c:\piit\PointInTime_"+$days+"_"+$hours+".bak"
                Backup-SqlDatabase -ServerInstance $Server -Database PointInTime -BackupAction Database -BackupFile $backupfile
            }else{
                #diff backup
                $backupfile = "c:\piit\PointInTime_.bcf"
                Backup-SqlDatabase -ServerInstance $Server -Database PointInTime -BackupAction Database -BackupFile $backupfile -Incremental
            }
        }else{
            #log backup
             $backupfile = "c:\piit\PointInTime_"+$days+"_"+$hours+".trn"
             Backup-SqlDatabase -ServerInstance $Server -Database PointInTime -BackupAction Log -BackupFile $backupfile
        }
        $hours++
    }
    $days++
}

Here we’re simulating one “week” in the life of a database, with the following events:

  • Every ‘minute’ we insert a row into the steps table
  • Every ‘hour’ we take a transaction log backup
  • At ’18:00′ on every 3rd ‘day’ we take a full backup
  • At ’18:00′ on days where we don’t take a full backup we take a differential backup

The script should take about 90 minutes to complete, and give you 162 backup files. And just to be awkward, and so we can reprove our previous scripts, I’ve put all the differential backups into a single file.

My version of this database runs from 16:13 20/09/2013 till 17:36, so I need to pick a PiT between those boundaries. Again we’re going to assume we’ve saved our file-reading functions into a script file, so we can just dot-source (.) them here:

$backupfiles = @()
$backups = @()

$backupfiles += Get-SQLBackupFiles -FilePath c:\piit\ -SQLServer "RestoreServer"

foreach($file in $BackupFiles | Sort-Object -property LastLSN){
    $backups +=Get-BackupContents -FilePath $file -SQLServer "RestoreServer"
}

$PointInTime = get-date("16:59 20/09/2013")
$filter = get-date($PointInTime) -format "dd MMMM yyyy HH:mm:ss"
$PointInTimeSQL = get-date($PointInTime) -format "MMM dd, yyyy hh:mm tt"
$RestoreFiles = @()

$RestoreFiles += $Backups | Where-Object {($_.BackupTypeDescription -eq "Database") -and ($_.BackupStartDate -lt $filter)} | Sort-Object LastLSN -Descending | Select-Object -First 1
$RestoreFiles += $Backups | Where-Object {($_.BackupTypeDescription -eq "Database Differential") -and ($_.BackupStartDate -lt $filter)} | Sort-Object LastLSN -Descending | Select-Object -First 1
$tmpLSN = $RestoreFiles | Measure-Object -Property LastLSN -Maximum
$RestoreFiles += $Backups | Where-Object {($_.LastLSN -ge $tmpLSN.Maximum) -and ($_.BackupTypeDescription -eq "Transaction Log") -and ($_.BackupStartDate -lt $filter)}

$RestoreServer = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("RestoreServer")
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

$RestoreFiles = $RestoreFiles | sort-object -property LastLSN

foreach ($backup in $RestoreFiles ){
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backup.filepath,$devicetype)
    $restore.devices.add($restoredevice)
    $restore.FileNumber = $backup.position
    $restore.ToPointInTime = $PointInTimeSQL
    $restore.Database = $backup.DatabaseName
    $restore.SqlRestore($RestoreServer)

    $backup | fl filepath, lastlsn, backuptypedescription, backupstartdate
    if ($backup.LastLSN -eq $RestoreFiles[-1].LastLSN){
        $query = "Restore database "+$restore.Database.tostring()+" with recovery"
        Invoke-Sqlcmd -ServerInstance $RestoreServer -query $query
    }
    $restore.Devices.Remove($restoredevice)
}

The first few lines are the familiar ones we’ve been using over the last couple of posts.

We then set some DateTime variables. We need a couple of different formats to match the formats in use later in the script, and I find it easier to keep the formatting separate to make the later lines more readable.

We then build up a $RestoreFiles array like before, but this time the filtering is different. Now we don’t want the latest full backup, we want the latest full backup prior to our Point in Time. Then we want the newest differential in the gap between that full backup and our Point in Time. Finally, we find all the transaction log backups between our differential and our Point in Time, remembering that we also need the log backup that goes past the Point in Time! It is easy to get the filtering wrong and stop at the log backup before the correct last one.

We sort $RestoreFiles by LastLSN to ensure that we restore the backup files in the correct order. We then loop through every backup in $RestoreFiles, building up a Restore object and restoring it. After each restore we compare the current backup’s LastLSN value with that of the final element in $RestoreFiles (the index position -1 in an array returns the last object in the array); if they are the same then we have restored our last file, and need to recover our database. We do this by using Invoke-SQLCmd to run a RESTORE DATABASE [dbname] WITH RECOVERY.

And if you now run select max(steptime) from PointInTime.dbo.Step you’ll find that you have a recovered database and that the returned value should be within 30 seconds of your chosen point in time.
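That check can be run from the same PowerShell session rather than reaching for Management Studio; a one-line sketch:

```powershell
# The most recent row restored should be within seconds of the chosen point in time
Invoke-Sqlcmd -ServerInstance "RestoreServer" -Query "select max(steptime) as LastStep from PointInTime.dbo.Step"
```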

Tomorrow we’ll look at cutting this up into reusable functions, and also adding some code to enable  our scripts to determine a random point in time covered by our backups, and restore to then for testing.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.


Day 18 of 31 Days of SQL Server Backup and Restore using PowerShell: Relocating files during Restore

So far during the restores portion of this series we’ve been assuming that we can restore our database’s files back to the same locations they were in on the original server. Often this is not the case, even on a dedicated restore server:

  • there’s the possibility that the files already exist
  • a good chance that the restore server’s filesystems are laid out differently
  • you may want to be able to restore the same database to multiple points in time
  • or many other reasons

So today we’re going to look at how you can quickly relocate every file of a restored database without knowing anything about them prior to the restore.

To do this we’re going to use another method of the SMO Restore object, ReadFileList. This works in the same manner as the T-SQL RESTORE FILELISTONLY: when run against a backup file it lists all the database files contained within it. Under PowerShell this returns a DataTable object which we can loop through to identify every file. We can use this information, plus the details we want to provide, to build RelocateFile objects, which we then add to the RelocateFiles collection of the Restore object.

For all the examples in today’s post we’ll assume that the following PowerShell has been run to build the basic restore environment:

Import-Module sqlps -DisableNameChecking
."c:\PSScripts\SQLBackupFunctions.ps1"

$backupfiles = @()
$backups = @()

$backupfiles += Get-SQLBackupFiles -FilePath c:\DiffRestore -SQLServer "Server1"

foreach($file in $BackupFiles){
    $backups +=Get-BackupContents -FilePath $file -SQLServer "Server1"
}

$RestoreFiles = @()
$RestoreFiles += $Backups | Where-Object {$_.BackupTypeDescription -eq "Database"} | Sort-Object LastLSN | Select-Object -First 1
$RestoreFiles += $Backups | Where-Object {$_.BackupTypeDescription -eq "Database Differential"} | Sort-Object LastLSN | Select-Object -First 1
$tmpLSN = $RestoreFiles | Measure-Object -Property LastLSN -Maximum
$RestoreFiles += $Backups | Where-Object {($_.LastLSN -ge $tmpLSN.Maximum) -and ($_.BackupTypeDescription -eq "Transaction Log")}

$restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

$HasThereBeenAnError = 0

As a basic example we’ll relocate all the database files in a backup file to c:\restorefiles\:

$Path = "c:\restorefiles\"
$SQLSvr = New-Object -TypeName Microsoft.SQLServer.Management.Smo.Server("Server1")

$restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($RestoreFiles[0].FilePath,$devicetype)
$restore.Devices.Add($restoredevice)
$DBFiles = $restore.ReadFileList($SQLSvr)
if (($DBFiles | Measure-Object -Property IsPresent -Average).Average -eq 1){
    foreach($File in $DBFiles){
        $rf = New-Object -TypeName Microsoft.SqlServer.Management.Smo.RelocateFile
        $rf.LogicalFileName = $File.LogicalName
        $rf.PhysicalFileName = $Path+(Split-Path $File.PhysicalName -Leaf)
        $restore.RelocateFiles.Add($rf)
        Remove-Variable rf
    }
}else{
    Write-Output "Does not contain all db files"
    break;
}

$restore.SqlRestore($SQLSvr)

We use ReadFileList to read the contents of our backup file into $DBFiles. We then use this line to make sure the backup contains all the database files:

if (($DBFiles | Measure-Object -Property IsPresent -Average).Average -eq 1){

This makes use of the fact that PowerShell treats $True as 1 and $False as 0 when you run a mathematical function across them. If all the files are present then the average should be 1, if any files are not present (IsPresent=False) then the average will be less than 1. This will happen if you pick a file that contains a File or FileGroup Backup. We’ll cover performing File and FileGroup restores in a later article, but will pretend they don’t exist for this one!

Assuming we’ve got a backup with all the files, we then loop through the data table. For each file we create a new RelocateFile object. We populate the LogicalFileName property directly and then build up a new path for the PhysicalFileName. We use the Split-Path cmdlet with its -Leaf switch to split the filename and extension from the rest of the path (ie; Split-Path "c:\test\file.txt" -Leaf would return file.txt). We then add the RelocateFile object to the RelocateFiles collection of our Restore object.

The ReadFileList method can also be used to check the size of the resulting restore. When SQL Server backs up a database it only writes out the data pages, ignoring empty space in the data files. So a database that has 250GB allocated on disk but only contains 120GB of data will produce a smaller backup than another database which has 250GB allocated on disk and contains 240GB of data. However, when SQL Server restores a database it restores the database files to the size they were when the backup was taken, complete with empty space. So in the first case, despite only having 120GB of data in your database, you’re going to need 250GB of spare disk space to restore the database. And despite all the pleas on dba.stackexchange.com there’s no way around that. The other problem is that as you roll forward transaction log backups the required amount of space on disk can change, so just because you can restore the full backup it doesn’t follow that you can restore the rest of the chain.

By running ReadFileList across all the files to be restored we can calculate the maximum size of each file in the database, and either place the files on filesystems with enough space or decide not to proceed with a restore that’s doomed to failure. We can do this as follows:

$DBfiles = @{}
$DBfileSize = @()
foreach ($file in $RestoreFiles){
    $sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($SQLServer)
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($file.FilePath,$devicetype)
    $restore.Devices.add($restoredevice)
    $dbfiles = $Restore.ReadFileList($sqlsvr)
    foreach ($tmpfile in $DBfiles){
        if (($DBFileSize.LogicalName -Contains $tmpfile.LogicalName) -eq $False){
            $tmpobj = New-Object System.Object
            $tmpobj | Add-Member -type NoteProperty -name LogicalName -value $tmpfile.LogicalName
            $tmpobj | Add-Member -type NoteProperty -name PhysicalName -value $tmpfile.PhysicalName
            $tmpobj | Add-Member -type NoteProperty -name Size -value $tmpfile.Size
            $DBFileSize += $tmpobj
        }elseif ($tmpfile.Size -gt ($DBFileSize | Where-Object {$_.LogicalName -eq $tmpfile.LogicalName}).Size){
            ($DBFileSize | Where-Object {$_.LogicalName -eq $tmpfile.LogicalName}).Size = $tmpfile.Size
        }
    }
    $restore.Devices.Remove($restoredevice)
}

$RestoreLocations = @("c:\restores\","d:\restores\","e:\restores\")
$LocationCount = 0
$MarginForError = 200*1024*1024
$FSO = New-Object -Com Scripting.FileSystemObject
$RelocateFile = @()
$CurrentlyNeeded = @{}

$i = 0
while ($i -lt $RestoreLocations.Count){
    $CurrentlyNeeded[$i]=0
    $i++
}
$i=0

while ($i -lt $DBfileSize.count){
    if ($DBFileSize[$i].size + $MarginForError + $CurrentlyNeeded[$LocationCount] -gt $FSO.getdrive($(Split-Path $RestoreLocations[$LocationCount] -Qualifier)).AvailableSpace){
        $LocationCount++
        if($LocationCount -eq $RestoreLocations.Count){
            write-output "No Space anywhere for this file!"
            break;
        }
    }else{
        $rf = new-object -typename Microsoft.SqlServer.Management.Smo.RelocateFile
        $rf.LogicalFileName = $DBFileSize[$i].LogicalName
        $rf.PhysicalFileName = $RestoreLocations[$LocationCount]+(Split-Path $DBFileSize[$i].PhysicalName -Leaf)
        $RelocateFile += $rf
        Remove-Variable rf
        $CurrentlyNeeded[$LocationCount] += $DBfileSize[$i].size
        $i++
        $LocationCount = 0
    }
}

$RestoreServer = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("RestoreServer")
$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
$SortedBackups = @($RestoreFiles | Sort-Object LastLSN)
foreach ($backup in $SortedBackups){
    $restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
    $restoredevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($backup.FilePath,$devicetype)
    $restore.Devices.Add($restoredevice)
    $restore.FileNumber = $backup.Position
    $restore.Database = $backup.DatabaseName
    foreach ($f in $RelocateFile){
        $restore.RelocateFiles.Add($f)
    }
    # Only the last backup in the LSN chain is restored with recovery
    if ($backup.LastLSN -eq $SortedBackups[-1].LastLSN){
        $restore.NoRecovery = $False
    }else{
        $restore.NoRecovery = $True
    }
    $restore.SqlRestore($RestoreServer)

    $restore.Devices.Remove($restoredevice)
}

This snippet could potentially take a while to run, as it has to read the file list from every backup file passed into it, so if you’ve got 100 transaction logs to process after your full backup be prepared for a bit of a wait!

Quite a lot going on in here. We start off by declaring $DBfiles, which will hold each backup’s file list, and an array $DBFileSize. Next we loop through our list of backup files in $RestoreFiles. For each one we call ReadFileList and put the results into $DBfiles. We then loop through each file returned. If we’ve not seen the particular file before, we create an object holding its LogicalName, PhysicalName and Size, and add that object to the $DBFileSize array. If we have seen the file before then we compare the size returned by ReadFileList to the one in $DBFileSize; if the new value is larger we replace the value in $DBFileSize. And then we go back around the loop.
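As an aside, the same maximum-size bookkeeping can be done a little more directly with a hashtable keyed on LogicalName, which avoids scanning $DBFileSize on every iteration. Here's a standalone sketch with made-up file lists standing in for the ReadFileList results:

```powershell
# Hypothetical file lists for two backups in a chain; we want the
# largest Size seen for each logical file across the whole chain.
$fileLists = @(
    @(
        [pscustomobject]@{LogicalName='mydb';     Size=100},
        [pscustomobject]@{LogicalName='mydb_log'; Size=50}
    ),
    @(
        [pscustomobject]@{LogicalName='mydb';     Size=250},
        [pscustomobject]@{LogicalName='mydb_log'; Size=40}
    )
)
$maxSizes = @{}
foreach ($list in $fileLists){
    foreach ($f in $list){
        # Record the file if unseen, or bump its size if this backup's is larger
        if (-not $maxSizes.ContainsKey($f.LogicalName) -or
            $f.Size -gt $maxSizes[$f.LogicalName]){
            $maxSizes[$f.LogicalName] = $f.Size
        }
    }
}
$maxSizes['mydb']       # 250
$maxSizes['mydb_log']   # 50
```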

Once we’ve been through every file, we move on to actually checking our free space. We initialise a couple of variables first:

  • $RestoreLocations – An array holding all the allowed locations (one per drive, or we’ll get confused later please!)
  • $LocationCount – A counter to keep track of which location from the above we’re currently looking at
  • $MarginForError – Just to leave some room for anything unexpected happening between checking and restoring; in the example this is set to 200MB
  • $FSO – A FileSystemObject COM object used to get free drive space
  • $RelocateFile – An array which will hold our relocation information for pushing into our restore
  • $CurrentlyNeeded – A hash table we’ll programmatically populate next. This is going to be used to keep track of the space our restore is going to use on each location

Next we populate the hashtable $CurrentlyNeeded with an entry of 0 for every entry in our $RestoreLocations array. Then we reset our counter $i to 0 for the next section.

We now loop through $DBFileSize. We’re using a while loop here because for each file in the array we want to try every possible location until we’ve either exhausted the possibilities or found somewhere for the file to go.

The first thing we do in the loop is check whether the size of the current file, plus the amount we’re already planning to use on the current drive, plus our $MarginForError, exceeds the free space on the drive containing the folder. If it does, we increment $LocationCount by 1 and go round the loop again with the same database file. If we increment $LocationCount beyond the end of our $RestoreLocations array we write out an error message and stop looping, as we don’t have anywhere with enough free space to restore this file.

If on one of the loops we do find some space, we build a new RelocateFile object ($rf) and populate it with our file details. We add it to our $RelocateFile array, add the size of the restored file to the space used on the drive we’re going to put it on, increment $i so we move on to the next file, and reset $LocationCount to 0 so we start from the beginning of our locations again.
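The placement logic itself is just a first-fit loop, which you can see in isolation with hypothetical capacities in place of the live FileSystemObject queries:

```powershell
# First-fit sketch: place each file on the first location with room,
# tracking what we've already promised to each drive. The capacities
# and file sizes are made-up numbers, not real drive queries.
$Capacities = @(300, 500)        # free space per location
$CurrentlyNeeded = @(0, 0)       # space claimed so far per location
$FileSizes = @(200, 200, 250)
$Placement = @()                 # which location each file ends up on
$i = 0
$LocationCount = 0
while ($i -lt $FileSizes.Count){
    if ($FileSizes[$i] + $CurrentlyNeeded[$LocationCount] -gt $Capacities[$LocationCount]){
        # No room here; try the next location for the same file
        $LocationCount++
        if ($LocationCount -eq $Capacities.Count){
            Write-Output "No space anywhere for file $i!"
            break
        }
    }else{
        $Placement += $LocationCount
        $CurrentlyNeeded[$LocationCount] += $FileSizes[$i]
        $i++
        $LocationCount = 0
    }
}
$Placement -join ','   # 0,1,1
```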

Next we use the same restore pattern as we used in yesterday’s post, with the addition of:

    foreach ($f in $RelocateFile){
        $restore.RelocateFiles.add($f)
    }

to pass our newly created array of file relocations into the restore object one by one. Then, all being well, we restore our database to its new home!

Today’s is a long post, but the core concepts are in the first half. The longer section afterwards is to give you some hints about how you can use this functionality to really streamline automating your restores.

Tomorrow, we’re going to start looking at restoring to a point in the database’s life other than the latest backup.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.
