Musings of a Data professional

Stuart Moore

Category: 31 days of SQL Server Backup and Restore with PowerShell

Day 31 of 31 Days of SQL Server Backup and Restore using PowerShell: Rounding Up

So here we are, post 34 in my 31 post series about using PowerShell to perform SQL Server backups and restores. We’ve taken a fairly whistle-stop tour of the main points. There have been a couple of omissions, but hopefully I’ve provided you with plenty of examples you can use in your own situations, or at least some ideas about how you can approach automating your restores. I also hope I’ve shown you that automating those restores needn’t be the nightmare you might have been led to believe from reading through some of the dynamic T-SQL out there.

Much of the more advanced use of these techniques comes down to using deeper features of PowerShell rather than the SMO features. If you want to learn more about PowerShell then I recommend the following books:

And the following blogs:

I have also been presenting on this topic at UK SQL Server User Groups this year. A set of slides, scripts and demos from one given at the Cardiff SQL User Group in October 2013 is available for download (cardiff-usergroup-powershell-backups-and-restores-01012013).

I’m also consolidating some of my production functions and routines into a PSBackup module, which I’m hoping to put up on GitHub so they can be worked on as an open project by anyone who wants to contribute. When it’s up I’ll post an announcement.

I hope you’ve enjoyed the posts, and please feel free to leave a comment, drop me a mail or ping me on Twitter if you have any questions.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 30 of 31 Days of SQL Server Backup and Restore using PowerShell: Recording for the Auditors

One of the reasons I became so keen on automating restores was that I’d regularly get requests from various auditors asking for examples of valid restores carried out in the last 3 months, or wanting me to justify my Recovery Time Objectives, or needing to ensure we had adequate DR testing. And there’s always a manager wanting reassurance that we could meet our SLA commitments.

By automating restores of our major production systems and recording the results, whenever a request came in I could quickly dump the information into Excel for them (remember, it’s not Real Information™ unless it’s in Excel).

So what sort of information should be audited about restores? I find the following are a minimum that cover most requests, though be sure to check for any industry/business specifics that may apply to your own case.

  • Time of restore
    • Restores should ideally be attempted at different times throughout the day. This will highlight any potential slowdowns due to other activity on hardware or network
  • What database was restored
  • Which server was the restore performed on
    • If you have multiple restore/DR servers, it’s important you have a log of testing a restore on all of them, to avoid finding out at a critical point that one of the set doesn’t work.
  • How long it took
  • How much data was written out
    • This could be the amount of data on disk at the end of the backup, or you could calculate the total throughput of all backup files restored, or both
  • To what size did the database files grow during the restore
    • This may not be the same as the previous metric. This value will also include the empty space within data files, and accommodate any ‘shrinks’ that happened during the period being restored
  • User running the restore
    • Just so you can recreate any permissions issues
  • Original server of backup
  • Location of all backup files restored
  • Output (or lack of) from DBCC
    • If you’re using NO_INFOMSGS you may still want to log the fact that you had no reported errors, just to record that it had been run
  • Output from in house check scripts
  • Log of any notifications sent to DBAs for further investigations
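
As a quick sketch of how simple the recording side can be (the property names, values and file path here are hypothetical placeholders, not from the series), each restore run can be captured as an object and appended to a CSV that drops straight into Excel:

```powershell
# Hypothetical example: capture one restore run as an object and append it to a CSV.
# All names, values and paths here are illustrative placeholders.
$auditRecord = [PSCustomObject]@{
    RestoreTime    = Get-Date
    Database       = "ProdDB1"
    RestoreServer  = "DRServer1"
    DurationSecs   = 342
    DataWrittenMB  = 2048
    RestoredBy     = $env:USERNAME
    OriginalServer = "Server1"
    DbccErrors     = 0
}

#Export-Csv -Append builds up one growing audit file over time
$auditRecord | Export-Csv -Path "c:\RestoreAudit\restores.csv" -Append -NoTypeInformation
```

The same object could just as easily be written to a logging table with Invoke-SQLCmd if you'd rather keep the history in SQL Server itself.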

Once you have this information you can start to mine it for your own use as well. You can make sure that all your ‘matched’ hardware is actually working at the same speed, and check that restoring whilst the network is under normal business load won’t add an extra hour to your RTO.

You can also start looking for trends. Are your restores taking a lot longer since the new SAN was put in? Is Server A producing a lot more alerts on checking, suggesting an underlying hardware problem to be investigated?

A side bonus of this is that your recovery hardware is being used. Rather than just sitting there waiting for a disaster, you’re actually reading and writing data from the drives. So at 3am during a panic restore you can also be confident that you don’t have a dead spindle or a flaky drive controller in your server.


Day 29 of 31 Days of SQL Server Backup and Restore using PowerShell: Why I don’t use the checktables SMO method

Two different occasions at SQL Saturday Cambridge 2013 made me realise that I needed to ‘justify’ my use of Invoke-SQLCmd to run DBCC back on Day 21 of 31 Days of SQL Server Backup and Restore using PowerShell: Verifying a restored database. One was during my session (slides and scripts here), and the other was during Seb Matthews’ session, where he said to view every use of Invoke-SQLCmd as a failure.

The first thing I don’t like about this particular method is what happens when you check a ‘good’ database:

Import-Module sqlps -DisableNameChecking

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("Server1")

$db = $sqlsvr.Databases.item("GoodDB")

$db.CheckTables("None")

If you run this, you’ll notice you get nothing back, not a thing. This is because CheckTables runs with NO_INFOMSGS by default, and there’s no way to override it. This might work for some applications, but in some environments you would want informational messages returned and recorded, as proof that all checks were run and passed.

That’s strike 1 for me.

If you’re wondering what the parameter to CheckTables is, it’s the repair level. Accepted values are:

  • None
  • Fast
  • Rebuild
  • AllowDataLoss

These operate in the same way as they do under T-SQL.

If you’re lucky enough not to have a corrupt database to play with, then Paul Randal has some you can download: (Paul Randal, SQLSkills.com, Corrupt SQL Server Databases). Assuming you’ve restored the db as chkDBCC, you can check it like so:

Import-Module sqlps -DisableNameChecking

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("Server1")

$db = $sqlsvr.Databases.item("chkDBCC")

$db.CheckTables("None")

This time we do get some output, unfortunately a not hugely useful generic error message:

Exception calling "CheckTables" with "1" argument(s): "Check tables failed for Database 'chkdbcc'. "
At line:5 char:1
+ $db.CheckTables("None")
+ ~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : Fai

Strike 2

We can improve on this if we actually examine the error in a bit more detail by putting it in a try/catch block:

Import-Module sqlps -DisableNameChecking

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("Server1")
$db = $sqlsvr.Databases.item("chkdbcc")

try{
    $db.CheckTables("None")
}
catch [System.Exception]{
    $err = $_.Exception
    while ( $err.InnerException )
    {
        $err = $err.InnerException
        Write-Output $err.Message
    }
}

And now we get a more informative response:

Check tables failed for Database 'chkdbcc'.
An exception occurred while executing a Transact-SQL statement or batch.
Check Catalog Msg 3853, State 1: Attribute (object_id=1977058079) of row (object_id=1977058079,column_id=1) in
 sys.columns does not have a matching row (object_id=1977058079) in sys.objects.
Check Catalog Msg 3853, State 1: Attribute (object_id=1977058079) of row (object_id=1977058079,column_id=2) in
 sys.columns does not have a matching row (object_id=1977058079) in sys.objects.
CHECKDB found 0 allocation errors and 2 consistency errors not associated with any single object.
CHECKDB found 0 allocation errors and 2 consistency errors in database 'chkDBCC'.

So how do we get the full information returned in our previous examples, such as:

Error       : 8992
Level       : 16
State       : 1
MessageText : Check Catalog Msg 3853, State 1: Attribute (object_id=1977058079) of row
              (object_id=1977058079,column_id=2) in sys.columns does not have a matching row
              (object_id=1977058079) in sys.objects.
RepairLevel :
Status      : 16
DbId        : 23
DbFragId    : 1
ObjectId    : 0
IndexId     : -1
PartitionId : 0
AllocUnitId : 0
RidDbId     : 23
RidPruId    : 0
File        : 0
Page        : 0
Slot        : 0
RefDbId     : 23
RefPruId    : 0
RefFile     : 0
RefPage     : 0
RefSlot     : 0
Allocation  : 1

Well, we can’t. As well as enforcing the NO_INFOMSGS clause, CheckTables doesn’t allow the use of TABLERESULTS.

Which is strike 3, and it’s out of here!

This is all personal opinion, but this is one of the very few DBA tasks where I will always opt to use Invoke-SQLCmd over an SMO object.
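
For comparison, here’s a minimal sketch of the Invoke-SQLCmd route, using the same placeholder server and database names as above. Running the DBCC ourselves means we keep full control of the options:

```powershell
Import-Module sqlps -DisableNameChecking

# Running DBCC CHECKDB directly lets us choose the options:
# ALL_ERRORMSGS and TABLERESULTS return exactly the detail CheckTables withholds.
$results = Invoke-SqlCmd -ServerInstance "Server1" `
    -Query "DBCC CHECKDB ('chkDBCC') WITH ALL_ERRORMSGS, TABLERESULTS"

# Each row carries the full record: Error, Level, State, MessageText, RepairLevel...
$results | Select-Object Error, Level, State, MessageText
```

And because the output comes back as rows, it can be logged straight into an audit table or CSV, which ties in with the auditing approach from Day 30.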

Hopefully we might see a better SMO option with the release of SQL Server 2014…


Day 28 of 31 Days of SQL Server Backup and Restore using PowerShell: Setting up Database Mirroring

A very practical use for automated SQL Server backups and restores with PowerShell is setting up High Availability solutions. These usually involve bringing two or more SQL Server instances into sync with each other, most commonly via backups and restores. SQL Server Mirroring may have been deprecated in SQL Server 2012 in favour of AlwaysOn Availability Groups, but I still see plenty of installs that haven’t migrated yet, so it’s still heavily in use. To set up SQL Server mirroring successfully there are a number of conditions that must be fulfilled.

  • All SQL Server instances should be of the same version. While you can set up mirroring from a lower version to a higher version (i.e. 2008 -> 2008 R2), if you fail over you get a free database upgrade thrown in and can never mirror back.
  • All SQL Server instances should be of the same edition (Enterprise, Standard, etc). You can get away with this if you are NOT running any edition-specific features, but the minute you enable an “Enterprise” feature your database won’t start on any instance of an edition that doesn’t support that feature
  • Instances should have the same starting ‘Log Sequence Number’ in their active transaction log. This ensures that all the ‘new’ transactions transmitted in the initial sync are working from the same start point. Prior to SQL Server 2008 R2 you could just restore a current full backup; now you need to restore a full backup and at least 1 transaction log
  • There should be a valid TCP mirroring endpoint on each instance
  • There should be a valid SQL Login with permissions to access the above Endpoints

All of this we can handle within PowerShell to make our lives nice and easy. The following is going to assume that you have a shared folder which both SQL Server instances have access to. If not, then you’ll just need to use the Copy-Item cmdlet to manually copy the backup files across. We’re also going to assume that both instances are running under the same domain account, so there’s no need to grant permissions on the endpoints.

Import-Module SQLPS -DisableNameChecking

#Setup some basic variables
$ServerPrimaryName = "Server1"
$ServerSecondaryName = "Server2"

#Let's do multiple databases
$Databases = ("db1","ProdDB2", "MirrorDB1")

#Both SQL Server instances should have read/write to here
$FileShare = "\\FileServer\DBShare\"

#Get all dbs onto the same log. Backups are taken on the primary,
#and restored (NoRecovery) onto the secondary:
foreach($db in $Databases){
    Backup-SQLDatabase -Database $db -ServerInstance $ServerPrimaryName -BackupFile ($FileShare+$db+"_full.bak")
    Backup-SQLDatabase -Database $db -ServerInstance $ServerPrimaryName -BackupFile ($FileShare+$db+"_log.trn") -BackupAction Log
    Restore-SQLDatabase -Database $db -ServerInstance $ServerSecondaryName -BackupFile ($FileShare+$db+"_full.bak") -NoRecovery
    Restore-SQLDatabase -Database $db -ServerInstance $ServerSecondaryName -BackupFile ($FileShare+$db+"_log.trn") -RestoreAction Log -NoRecovery
}

#Now we need to create a TCP Mirroring EndPoint on each Server
$ServerPrimary = New-Object -TypeName Microsoft.SQLServer.Management.Smo.Server($ServerPrimaryName)
$ServerSecondary = New-Object -TypeName Microsoft.SQLServer.Management.Smo.Server($ServerSecondaryName)
$EPName = "DBMirror-PS"
$EPPort = 7022

$PrimaryEP = New-Object -TypeName Microsoft.SqlServer.Management.Smo.EndPoint -ArgumentList $ServerPrimary, $EPName
$PrimaryEP.ProtocolType = [Microsoft.SqlServer.Management.Smo.ProtocolType]::Tcp
$PrimaryEP.EndpointType = [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring
$PrimaryEP.Protocol.Tcp.ListenerPort = $EPPort
$PrimaryEP.Payload.DatabaseMirroring.ServerMirroringRole = [Microsoft.SqlServer.Management.Smo.ServerMirroringRole]::Partner
$PrimaryEP.Create()
$PrimaryEP.Start()

$SecondaryEP = New-Object -TypeName Microsoft.SqlServer.Management.Smo.EndPoint -ArgumentList $ServerSecondary, $EPName
$SecondaryEP.ProtocolType = [Microsoft.SqlServer.Management.Smo.ProtocolType]::Tcp
$SecondaryEP.EndpointType = [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring
$SecondaryEP.Protocol.Tcp.ListenerPort = $EPPort
$SecondaryEP.Payload.DatabaseMirroring.ServerMirroringRole = [Microsoft.SqlServer.Management.Smo.ServerMirroringRole]::Partner
$SecondaryEP.Create()
$SecondaryEP.Start()

foreach ($db in $Databases){
    #Point each side at its partner, setting the mirror side first
    $ServerSecondary.Databases.Item($db).MirroringPartner = "TCP://"+$ServerPrimary.NetName+":"+$EPPort
    $ServerSecondary.Databases.Item($db).Alter()
    $ServerPrimary.Databases.Item($db).MirroringPartner = "TCP://"+$ServerSecondary.NetName+":"+$EPPort
    $ServerPrimary.Databases.Item($db).Alter()
}

As you can see this is pretty simple, and easily reusable whenever you need to set DB Mirroring up. For this example we entered the database names explicitly, but this could be easily modified to mirror every database on an instance with the following changes:

Import-Module SQLPS -DisableNameChecking

#Setup some basic variables
$ServerPrimaryName = "Server1"
$ServerSecondaryName = "Server2"
$ServerPrimary = New-Object -TypeName Microsoft.SQLServer.Management.Smo.Server($ServerPrimaryName)
$ServerSecondary = New-Object -TypeName Microsoft.SQLServer.Management.Smo.Server($ServerSecondaryName)

#Load all the user databases in the instance (system databases can't be mirrored)
$Databases = $ServerPrimary.Databases | Where-Object {$_.IsSystemObject -eq $False}

#Both SQL Server instances should have read/write to here
$FileShare = "\\FileServer\DBShare\"

#Get all dbs onto the same log, backing up on the primary and restoring on the secondary:
foreach($db in $Databases){
    Backup-SQLDatabase -Database $db.Name -ServerInstance $ServerPrimaryName -BackupFile ($FileShare+$db.Name+"_full.bak")
    Backup-SQLDatabase -Database $db.Name -ServerInstance $ServerPrimaryName -BackupFile ($FileShare+$db.Name+"_log.trn") -BackupAction Log
    Restore-SQLDatabase -Database $db.Name -ServerInstance $ServerSecondaryName -BackupFile ($FileShare+$db.Name+"_full.bak") -NoRecovery
    Restore-SQLDatabase -Database $db.Name -ServerInstance $ServerSecondaryName -BackupFile ($FileShare+$db.Name+"_log.trn") -RestoreAction Log -NoRecovery
}

#Now we need to create a TCP Mirroring EndPoint on each Server

$EPName = "DBMirror-PS"
$EPPort = 7022

$PrimaryEP = New-Object -TypeName Microsoft.SqlServer.Management.Smo.EndPoint -ArgumentList $ServerPrimary, $EPName
$PrimaryEP.ProtocolType = [Microsoft.SqlServer.Management.Smo.ProtocolType]::Tcp
$PrimaryEP.EndpointType = [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring
$PrimaryEP.Protocol.Tcp.ListenerPort = $EPPort
$PrimaryEP.Payload.DatabaseMirroring.ServerMirroringRole = [Microsoft.SqlServer.Management.Smo.ServerMirroringRole]::Partner
$PrimaryEP.Create()
$PrimaryEP.Start()

$SecondaryEP = New-Object -TypeName Microsoft.SqlServer.Management.Smo.EndPoint -ArgumentList $ServerSecondary, $EPName
$SecondaryEP.ProtocolType = [Microsoft.SqlServer.Management.Smo.ProtocolType]::Tcp
$SecondaryEP.EndpointType = [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring
$SecondaryEP.Protocol.Tcp.ListenerPort = $EPPort
$SecondaryEP.Payload.DatabaseMirroring.ServerMirroringRole = [Microsoft.SqlServer.Management.Smo.ServerMirroringRole]::Partner
$SecondaryEP.Create()
$SecondaryEP.Start()

foreach ($db in $Databases){
    #Point each side at its partner, setting the mirror side first
    $ServerSecondary.Databases.Item($db.Name).MirroringPartner = "TCP://"+$ServerPrimary.NetName+":"+$EPPort
    $ServerSecondary.Databases.Item($db.Name).Alter()
    $db.MirroringPartner = "TCP://"+$ServerSecondary.NetName+":"+$EPPort
    $db.Alter()
}

And this certainly speeds things up if you need to set up a large number of mirrored dbs in one fell swoop.


Day 27 of 31 Days of SQL Server Backup and Restore using PowerShell: Why the Long version?

You might have noticed over the last 26 days that I seem to prefer using the long form of PowerShell backups over the more concise cmdlet versions.

There’s no real reason for this, as they’re all building up the same T-SQL BACKUP statements at the end of the day. However, I find that when automating things, building up a backup one option at a time, like:

$Restore.Replace=$True
$Restore.NoRecovery = $False

gives me very easy control of what gets passed in to my backup or restore:

if ($x -eq $y){
    $Restore.Replace=$True
}else{
    $Restore.Replace=$False
}

and so on for all the parameters you might want to add.

But with the newer Restore-SQLDatabase cmdlet you can’t use:

if ($x -eq $y){
    $RecoveryOption = $True
}else{
    $RecoveryOption = $False
}
Restore-SQLDatabase -Database $DBName -BackupFile $backupfile -NoRecovery $RecoveryOption

That’s because -NoRecovery is a switch, not a parameter. Just specifying the switch turns the feature on, and there’s no actual value to pass through to it. This can be a bit limiting when you want complete control within a flexible set of scripts.
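
That said, if you do need to drive a switch from a variable, there are a couple of general-purpose PowerShell workarounds, sketched here with the same hypothetical variables as above:

```powershell
# Workaround 1: a switch can still be bound explicitly using the colon syntax
$RecoveryOption = $False
Restore-SqlDatabase -Database $DBName -BackupFile $backupfile -NoRecovery:$RecoveryOption

# Workaround 2: build the parameters up in a hashtable and splat them,
# only adding the switch when it's actually wanted
$params = @{
    Database   = $DBName
    BackupFile = $backupfile
}
if ($x -eq $y){
    $params['NoRecovery'] = $True
}
Restore-SqlDatabase @params
```

Splatting in particular gets you back some of that option-at-a-time flexibility, though I still find the SMO objects more natural for it.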

I find the new cmdlets are much easier when you just want to quickly back something up from the command line. Today I used this 2-liner to quickly back up all the databases on a dev/test server before we tried something destructive:

cd SQLSERVER:\SQL\DEVBOX1\DEFAULT\DATABASES
ls|%{$_|Backup-SQLDatabase}

Much simpler and quicker to type than anything else I can think of!

(Just in case you can’t read what is a great example of bad code, I’m making use of a couple of shortcuts in PowerShell:
ls is a system alias for Get-ChildItem, which coming from a Unix background I tend to use naturally, and % is an alias for ForEach-Object, which steps through the objects in the collection, loading $_ on each iteration. Plus PowerShell doesn’t mind about missing spaces if things aren’t ambiguous. This type of code is great for one-offs when you just want to do something quickly, but not something you ever want to leave in proper scripts or functions, and it’s not fun to discover when debugging as it probably means shortcuts were also taken elsewhere!)

So to answer the original question, it’s just down to a personal preference, and using the right tool at the right time.


Day 26 of 31 Days of SQL Server Backup and Restore using PowerShell: Restoring SQL Server Analysis Services databases

Yesterday we looked at backing up SQL Server Analysis Services databases, and today we’ll look at restoring them.

Just as with backups, prior to SQL Server 2012 SQLPS restores had to be done a little differently. Unlike Backup, which is a method of a Database object, Restore is a method of the Server class. This makes sense: if you’re restoring a database, it won’t necessarily exist for you to create an object from. In its simplest form you just need to provide the location of the backup file:

Import-Module SQLPS -DisableNameChecking

$SSASInstance = "SSASServer1\Instance1"

$BackupFile = "c:\backups\db1.abf"

$SSASServer = New-Object Microsoft.AnalysisServices.Server
$SSASServer.Connect($SSASInstance)

$SSASServer.Restore($BackupFile)

However the Restore method can take more parameters to allow for more flexibility, in much the same way as we saw with the Backup method yesterday. For Restore these are:

Parameter        Type                       Description
BackupFile       String                     Path to the backup file
DatabaseName     String                     Name to restore the database as (default is the name it was backed up with)
OverWrite        Boolean                    Whether to overwrite an existing db, default is False
RestoreLocation  Array of RestoreLocations  Locations for restoring remote dbs
RestoreSecurity  RestoreSecurity enum       What to do with security roles after the restore (Copy, Ignore, Skip)
Password         String                     Password to restore a password-protected backup
StorageLocations String                     A different storage location for the restored db

These can also be specified by creating a RestoreInfo object and passing that to the Restore method. Just as with the Backup method yesterday, to supply a value you must provide all the preceding values as well, even if it’s just empty strings or NULL.

With the release of the SQL Server 2012 PowerShell provider, MS also released a Restore-ASDatabase cmdlet. This can make restores much simpler to automate:

Import-Module SQLPS -DisableNameChecking

$SSASInstance = "SSASServer1\Instance1"
$BackupFiles = Get-ChildItem "c:\SSASbackups\"

foreach ($file in $BackupFiles){
    Restore-ASDatabase -Server $SSASInstance -RestoreFile $file.FullName -AllowOverwrite
}


Day 25 of 31 Days of SQL Server Backup and Restore using PowerShell: Backing up SQL Server Analysis Services databases

So far in this series we’ve only looked at SQL Server database backups, but PowerShell can also be used to backup and restore SQL Server Analysis Server objects.

Prior to the release of the latest version of SQLPS with SQL Server 2012, SSAS backups were achieved via the Backup method of an Analysis Services Database object:

Import-Module SQLPS -DisableNameChecking

$SSASInstance = "SSASServer1\Instance1"
$DBName = "db1"

$BackupFile = $DBName+"_"+[DateTime]::Now.ToString("yyyyMMdd_HHmmss")+".abf"

$SSASServer = New-Object Microsoft.AnalysisServices.Server
$SSASServer.Connect($SSASInstance)

$DB = $SSASServer.databases[$DBName]
$DB.backup($BackupFile)

Note that we don’t have to build a backup object, or any devices, so this is all a lot simpler than a Data Platform backup. However there is less flexibility. Some other options are available, as the Backup method is ‘overloaded’ (it has a number of different implementations). All the possible options are listed below.

Parameter              Type                       Action
File                   String                     Filename to back up to
allowOverwrite         Boolean                    Whether the backup can overwrite an existing file, default is no
backupRemotePartitions Boolean                    Whether remote partitions should be backed up, default is no
RemoteLocations        Array of backup locations  Where remote partitions should be backed up to
ApplyCompression       Boolean                    Whether to compress the backup or not, default is off
Password               String                     Password to encrypt backup files

Just to make life slightly more awkward, you need to specify every parameter before the last one you want to use. So if you want to set compression on, then you need to specify File, allowOverwrite, backupRemotePartitions and RemoteLocations as well, even if just passing empty values like so:

$DB.backup($BackupFile,$False,$False,$null,$True)

With SQLPS as provided with SQL Server 2012, there is another way. The cmdlet Backup-ASDatabase was introduced, which works in much the same way as Backup-SQLDatabase:

Backup-ASDatabase -BackupFile $BackupFile -Server $SSASInstance -Name $DBName -ApplyCompression

This is slightly easier to work with, and can be automated for all the dbs in an instance just like we did for the Data Platform examples:

Import-Module SQLPS -DisableNameChecking

$SSASInstance = "SSASServer1\Instance1"
$SSASServer = New-Object Microsoft.AnalysisServices.Server
$SSASServer.Connect($SSASInstance)

foreach($DB in $SSASServer.Databases){
    $BackupFile = $DB.Name+"_"+[DateTime]::Now.ToString("yyyyMMdd_HHmmss")+".abf"
    Backup-ASDatabase -BackupFile $BackupFile -Server $SSASInstance -Name $DB.Name -ApplyCompression
}

And as a bonus, this format is backward compatible with SQL Server 2008 and SQL Server 2008 R2: you only need the SQL Server 2012 SQLPS module installed, which comes with the SQL Server client tools (free to download and use if you grab SQL Server 2012 Express SP1 or better). So it could make your life easier to grab this even if you upgrade nothing else to 2012.

Note that all the usual Analysis Services restrictions on what is actually backed up apply when backing up via PowerShell, just as they would via any other method.

One advantage of backing up SSAS via PowerShell is that it’s very easy to ensure that the underlying datastores are also backed up at the same time.

Tomorrow we’ll look at restoring SSAS Databases.


Day 24 of 31 Days of SQL Server Backup and Restore using PowerShell: Error Checking

So far most of our scripts have been a little bereft of error checking. This has mainly been to make them easier to read. If you’ve been experimenting with the scripts, then it’s almost certain that you’ll have come across the following unhelpful error message from PowerShell:

Exception calling "SqlBackup" with "1" argument(s): "Backup failed for Server 'Server1'. "
At line:1 char:1
+ $Backup.SqlBackup($SQLSvr)
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : FailedOperationException

You’ll probably also have discovered that amongst many things it can be a response to:

  • Wrong Server name
  • Permissions error on backup destination
  • Contradictory parameters
  • DB exists and you’ve not specified replace
  • You’ve specified a filename that’s already in use
  • Many, many, more

This means that for scripts you want to be reliable for automation, you need to make sure you’ve got robust checking in place, or at least a good logging system so you can go back and review the settings for each run. Common issues, and how to check for them, are listed below.

Missing folders, or files that already exist: PowerShell provides Test-Path, which does what it says on the tin. It tests a path to see if it exists, returning $True or $False as appropriate, so you can quickly check a path before writing to it. To make sure that your backup folder exists before writing to it:

if ((Test-Path "c:\mybackups\") -eq $False){
    write-output "Missing folder"
}

This could be modified to create the missing path if required, or if you actually wanted the file not to be there you can redirect your process.
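
For example, a minimal sketch of creating the folder when it’s missing (the path is a placeholder):

```powershell
$BackupPath = "c:\mybackups\"

# Create the backup folder if it doesn't already exist
if ((Test-Path $BackupPath) -eq $False){
    New-Item -Path $BackupPath -ItemType Directory | Out-Null
}
```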

Getting the SQL Server name wrong or the server not being available. When you build your new SQL Server object like so:

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("WrongServer")

you might think that PowerShell would warn you if the server didn’t exist, but it doesn’t. And you might not find out until you get the error message above. To check it as soon as you create it, you need to try to use it:

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("WrongServer")
try{
    $sqlsvr.databases | Out-Null
}
catch{
    Write-Host "SQL Server Not Available"
}

You’ll also need to check that the database you’re targeting for backup or restore exists, though for different reasons. If you’re restoring and the database exists, you need to set the Restore object’s Replace property to $True:

$DBname = "fdg"

try{
    $sqlsvr.Databases.item($DBname) | Get-Member -ErrorAction Stop
}
catch [System.Exception]{
    <#
    DB does not exist.
    Choose to abort the restore attempt.
    Set a flag that can be checked in the Finally block
    #>
    $DBexists = $false
}
finally{
    if ($DBexists -ne $False){
        <#
        DB exists, choose whether to abort the restore
        or set .Replace = $True
        #>
    }
}

A common problem when backing up or restoring is the SQL Server service account not having the correct permissions to access the area. Remember that PowerShell is just submitting T-SQL to SQL Server to execute, so while the account running the script may have permissions on the path, the service account actually performing the backup or restore may not.

Checking the permissions takes a bit more code but is also possible. This assumes we have a validated SQL Server connection called $sqlsvr, and also that the account running the script has permissions to see the permissions on the requested object:

$serviceaccount = $sqlsvr.ServiceAccount

#Need to append \\ to the end of the domain so it works in -replace
$domain= $env:USERDOMAIN+"\\"

#Path to Folder we want to check
$path = "c:\folder1\folder2"

#Get Group memberships for Service Account (requires the ActiveDirectory module)
Import-Module ActiveDirectory
$groups = Get-ADPrincipalGroupMembership ($ServiceAccount -ireplace $domain,"")

#Build up an Array to hold all the group and account names
$accounts = @()
$accounts += $groups.samaccountname
$accounts += $ServiceAccount -ireplace $domain,""

#Set a check for whether we have write permissions
$write=0
$read=0

#Get the ACL on our specified path
$perms = get-acl $path

#For each ACL in array
foreach ($acl in $perms.access){
    #Then for each acccount/group
    foreach ($acc in $accounts){
        #If a right is assigned to a group or account
        if ($env:USERDOMAIN+"\"+$acc -ieq $acl.IdentityReference){
            #Check if it's a permission that gives us write access
            if (($acl.FileSystemRights -ilike "*FullControl*") -or ($acl.FileSystemRights -ilike "*Modify*")){
                if($acl.AccessControlType -ne "Allow"){
                    #Hit a Deny, overrides any other permission, so drop out of checking
                    $write = 0
                    break
                }
                #Increment our write check. Keep looping as we may not have hit a deny yet
                $write++

                #If we have write we have read
                $read++

            }elseif ($acl.FileSystemRights -ilike "*Read*"){
                $read++
            }
        }
    }
}

if ($write -eq 0){
    write-output "SQL Server service account $serviceaccount doesn't have write access to $path"
    #Take action
}

if ($read -eq 0){
    write-output "SQL Server service account $serviceaccount doesn't have read access to $path"
    #Take action
}

If you’re accessing your path through a share, then you also need to check the share permissions, as the effective access is the more restrictive of the share and filesystem permissions (same assumptions as the filesystem example above):

$serviceaccount = $sqlsvr.ServiceAccount

#Need to append \\ to the end of the domain so it works in -replace
$domain= $env:USERDOMAIN+"\\"

#Path to Folder we want to check
$path="\\Server1\Share3$\folder1"
#Split the path on \ to get the bits we want
$ShareArray = $path -Split [regex]::Escape('\')

#Get Group memberships for Service Account
$groups = Get-ADPrincipalGroupMembership ($ServiceAccount -ireplace $domain,"")

#Build up an Array to hold all the group and account names
$accounts = @()
$accounts += $groups.samaccountname
$accounts += $ServiceAccount -ireplace $domain,""

#Set a check for whether we have write permissions
$write=0
$read=0

$share=$sharearray[3].Trim()
$ShareSecurity = Get-WMIObject -Class Win32_LogicalShareSecuritySetting -Filter "name='$Share'"  -ComputerName $ShareArray[2]
$ShareSecurityDesc = $ShareSecurity.GetSecurityDescriptor().Descriptor
foreach($ace in $ShareSecurityDesc.DACL){
    foreach ($acc in $accounts){
        #Wrap the raw ACE in a FileSystemAccessRule so we can inspect it the same way as before
        $acl = New-Object Security.AccessControl.FileSystemAccessRule($ace.Trustee.Name, $ace.AccessMask, $ace.AceType)
        #If a right is assigned to a group or account
        if ($acc -ieq $acl.IdentityReference){
            #Check if it's a permission that gives us write access
            if (($acl.FileSystemRights -ilike "*FullControl*") -or ($acl.FileSystemRights -ilike "*Modify*")){
                if($acl.AccessControlType -ne "Allow"){
                    #Hit a Deny, overrides any other permission, so drop out of checking
                    $write = 0
                    break
                }
                #Increment our write check. Keep looping as we may not have hit a deny yet
                $write++
                #If we have write we have read
                $read++

            }elseif ($acl.FileSystemRights -ilike "*Read*"){
                $read++
            }
        }
    }
}

Those examples cover a number of the most common errors I see when leaving backups and restores to run on their own. You can add more to cover cases as you discover them.

And at a minimum, ensure that everything is encased in try/catch/finally blocks (making sure to add -ErrorAction Stop to non-terminating cmdlets). The next most useful thing is to use the Script() method to keep a record of what you’re asking SQL Server to do; much of the time you’ll look at the generated T-SQL and realise what you’ve done wrong.
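As a minimal sketch of that pattern (assuming a $Backup object, $SQLSvr connection and $BackupDevice have already been built as in the examples elsewhere in this series), the wrapping might look like:

```powershell
try {
    #-ErrorAction Stop turns non-terminating cmdlet errors into catchable exceptions
    Import-Module SQLPS -DisableNameChecking -ErrorAction Stop

    #Keep a copy of the T-SQL we're about to submit, for debugging and auditing
    $tsql = $Backup.Script($SQLSvr)

    #SMO methods throw .NET exceptions on failure, which the catch block picks up
    $Backup.SqlBackup($SQLSvr)
}
catch {
    Write-Output "Backup failed: $($_.Exception.Message)"
    Write-Output "T-SQL submitted was: $tsql"
    #Take action
}
finally {
    #Always runs, so the device is removed whether the backup succeeded or not
    $Backup.Devices.Remove($BackupDevice)
}
```

This is a sketch rather than a complete script: the variable names are carried over from the earlier examples, and in production you’d likely log to a table or file rather than Write-Output.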

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 23 of 31 Days of SQL Server Backup and Restore using PowerShell: It’s all just T-SQL, really

Yesterday we compared the performance of running backups with T-SQL and with PowerShell. And the conclusion was that on small backups there’s a small overhead, which comes from building up the SMO objects.

At the end of the day, we shouldn’t have been expecting much of a difference in performance. All PowerShell’s SQLPS module is doing is using SQL Server Management Objects (SMO) to create T-SQL, which is then submitted to SQL Server. So most of the small performance overhead is purely the work of creating that T-SQL and then connecting to our SQL Server instance to submit it for execution.

We can exploit this feature in 2 ways:

  • Rather than executing the T-SQL, we can just get PowerShell to give us the scripts so we can run them ourselves. This might be something you want to do in the case of an important restore, if you want to confirm the actions before committing to them.
  • Fault finding. SQLPS doesn’t offer the most informative error messages (in fact it quite often returns the same one for any error), so by extracting the T-SQL you can quickly see if the logic was correct, or if you’ve forgotten to create part of your backup or restore.

Getting the SQL Script is very simple, it’s just a substitution at the point of restore or backup:

  • Switch $backup.SQLBackup for $Backup.Script
  • Switch $Restore.SQLRestore for $Restore.Script
  • Add the -Script switch to the Restore-SqlDatabase or Backup-SqlDatabase cmdlets

With a simple example, it’s easy to see what will come out:

Import-Module "SQLPS" -DisableNameChecking

$ServerName = "Server1\SQL2012"
$SQLSvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($ServerName)

$Db = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Database
$Db = $SQLSvr.Databases.Item("psdb1")

$Backup = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Backup
$Backup.Action = [Microsoft.SQLServer.Management.SMO.BackupActionType]::Database
$backup.BackupSetDescription = "Full Backup of "+$Db.Name
$Backup.Database = $db.Name

$BackupName = "c:\psbackups\"+$Db.Name+"_"+[DateTime]::Now.ToString("yyyyMMdd_HHmmss")+".bak"
$DeviceType = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
$BackupDevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($BackupName,$DeviceType)

$Backup.Devices.Add($BackupDevice)
$Backup.Script($SQLSvr)
$Backup.Devices.Remove($BackupDevice)

will produce:

BACKUP DATABASE [psdb1] TO  DISK = N'c:\psbackups\psdb1_20130924_181155.bak' WITH  DESCRIPTION = N'Full Backup of psdb1', NOFORMAT, NOINIT, NOSKIP, REWIND, NOUNLOAD,  STATS = 10

But if we go for a more complex restore it provides a lot more. As an example, let’s modify our Restore-SQLDatabasePointInTime function from Day 20:

function Restore-SQLDatabasePointInTimeScript
{param($BackupFiles, [string]$SQLServer, [datetime]$PointInTime=(Get-Date).AddDays(1))

#Server object to script against (this version only ever scripts, it doesn't restore)
$RestoreServer = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server($SQLServer)

$ScriptOutput = @()
$restore = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Restore
$restore.ToPointInTime = Get-Date($PointInTime) -Format "MMM dd, yyyy hh:mm tt"

$devicetype = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
foreach ($backup in $BackupFiles){
    $restoredevice = New-Object -TypeName Microsoft.SqlServer.Management.Smo.BackupDeviceItem($backup.filepath,$devicetype)

    $restore.Devices.Add($restoredevice) | Out-Null
    $restore.FileNumber = $backup.position
    $restore.Database = $backup.DatabaseName
    $ScriptOutput += $restore.Script($RestoreServer)

    if ($backup.LastLSN -eq $BackupFiles[-1].LastLSN){
        $ScriptOutput += "Restore database "+$restore.Database.ToString()+" with recovery"
    }
    $restore.Devices.Remove($restoredevice) | Out-Null
}
return $ScriptOutput
}

And then call the same sequence of functions for our point in time restore operation, except substituting our new function at the very end. Then we can see that we get a very handy amount of code built for us:

RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_3_18.bak' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_.bcf' WITH  FILE = 4,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_5_19.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_5_20.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_5_21.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_5_22.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_5_23.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_0.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_1.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_2.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_3.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_4.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_5.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_6.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_7.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_8.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_9.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
RESTORE DATABASE [PointInTime] FROM  DISK = N'C:\piit\PointInTime_6_10.trn' WITH  FILE = 1,  NOUNLOAD,  STATS = 10,  STOPAT = N'Sep 20, 2013 05:32 PM'
Restore database PointInTime with recovery

It is in fact fine to do the restore and retrieve the script in the same operation, so we could modify our original function to include the .Script() method as well, or make it optional via a switch or parameter. Collecting the script for every restore/backup and then storing it would give you a head start when trying to debug issues, as well as more information to provide to auditors.
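As a sketch of that idea, the generated script could be written out to a dated file at restore time (the audit folder path and the parameter values here are just placeholders, and $BackupFiles is the collection built up in the earlier posts):

```powershell
#Illustrative sketch only: server name and output path are assumptions
$script = Restore-SQLDatabasePointInTimeScript -BackupFiles $BackupFiles -SQLServer "Server1\SQL2012" -PointInTime $PointInTime

#File the generated T-SQL away with a timestamp so there's an audit trail
$auditFile = "c:\psbackups\audit\restore_"+[DateTime]::Now.ToString("yyyyMMdd_HHmmss")+".sql"
$script | Out-File -FilePath $auditFile
```

A scheduled test restore that also drops one of these files per run goes a long way towards answering the auditors’ questions from Day 30.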

As the PowerShell submissions are ultimately just T-SQL you can use all your normal tools to monitor or control them.

Tomorrow we’ll look at

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 22 of 31 Days of SQL Server Backup and Restore using PowerShell: Performance comparisons

So far we’ve been looking at the cool stuff we can do with SQL Server backups, restores and PowerShell. And I don’t know about you, but it looks like it’s going to make life a lot easier. But is there a downside?

Well, as we’re DBAs, the first thing we usually worry about is performance. So let’s run some raw speed tests to see if there’s any unwanted overhead. For a backup, most of the overhead is going to be the speed of the underlying drives, so all these tests are run on the same rig: SQL Server 2012 running on Windows 2012 in a VirtualBox virtual machine, with PowerShell 3.0.

We need a sizeable DB so we can get some decent timings. Nothing fancy here:

USE [master]
GO
CREATE DATABASE [SpeedTest]
 CONTAINMENT = NONE
 ON  PRIMARY
( NAME = N'SpeedTest', FILENAME = N'E:\speedtest\SpeedTest.mdf')
 LOG ON
( NAME = N'SpeedTest_log', FILENAME = N'E:\speedtest\SpeedTest_log.ldf')
GO
USE [SpeedTest]
GO
CREATE TABLE [dbo].[bigtable1](
	[dummyID] [int] IDENTITY(1,1) NOT NULL,
	[bigdatat] [varchar](8000) NULL
) ON [PRIMARY]

GO
SET ANSI_PADDING OFF
GO
/****** Object:  Table [dbo].[bigtable2]    Script Date: 24/09/2013 15:05:49 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[bigtable2](
	[dummyID] [int] IDENTITY(1,1) NOT NULL,
	[bigdatat] [varchar](8000) NULL
) ON [PRIMARY]

GO

CREATE TABLE [dbo].[bigtable3](
	[dummyID] [int] IDENTITY(1,1) NOT NULL,
	[bigdatat] [varchar](8000) NULL
) ON [PRIMARY]

GO

CREATE TABLE [dbo].[bigtable4](
	[dummyID] [int] IDENTITY(1,1) NOT NULL,
	[bigdatat] [varchar](8000) NULL
) ON [PRIMARY]

GO

And then bloat the tables with junk:

insert into Bigtable1 (bigdatat) values (replicate('junk',2000))
insert into Bigtable2 (bigdatat) values (replicate('junk',2000))
insert into Bigtable3 (bigdatat) values (replicate('junk',2000))
insert into Bigtable4 (bigdatat) values (replicate('junk',2000))
go 100000

Which gives us a database of around 3GB (100,000 batches of 4 inserts, each row roughly 8KB). Tiny in the real world sense, but large enough to give us some usable stats.

To get the T-SQL timings we use the following script:

create table backuptime(
ident integer,
starttime datetime,
endtime datetime
)

declare @i integer  =1;

while @i<20
begin
    declare @starttime datetime, @endtime datetime;
    set @starttime = CURRENT_TIMESTAMP
    backup database Speedtest to disk='e:\sizetest\backup.bak' with init
    select @endtime = current_timestamp
    insert into backuptime values (@i,@starttime,@endtime)
    set @i = @i+1
end

We create a holding table, and then loop through 19 backups, recording the start and finish times in our table.

And to get our PowerShell timings:

Import-Module SQLPS -DisableNameChecking

$Database = "SpeedTest"
$ServerName = "WIN-C0BP65U3D4G"

$SQLSvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($ServerName)
$Db = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Database
$Db = $SQLSvr.Databases.Item($Database)
$BackupName = "e:\sizetest\speedtestps.bak"
$DeviceType = [Microsoft.SqlServer.Management.Smo.DeviceType]::File

$i=1

$times = @()

while ($i -lt 20){
    $StartTime = get-date
    $Backup = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Backup
    $Backup.Initialize = $True

    $Backup.Action = [Microsoft.SQLServer.Management.SMO.BackupActionType]::Database
    $Backup.Database = $db.Name

    $BackupDevice = New-Object -TypeName Microsoft.SQLServer.Management.Smo.BackupDeviceItem($BackupName,$DeviceType)

    $Backup.Devices.Add($BackupDevice)
    $Backup.SqlBackup($SQLSvr)
    $Backup.Devices.Remove($BackupDevice)

    remove-variable backup
    $times += new-timespan $StartTime $(get-date)
    $i++
}

We create the connection to SQL Server outside of our loop, as this is the most costly part of the script but is only really needed once per connection to the server. The backup runs the same full backup as our T-SQL version. We use the New-TimeSpan cmdlet to calculate the interval between start and finish, then store that in an array which we can later analyse with Measure-Object to get the maximum, minimum and average.
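Once the loop finishes, $times can be summarised directly. Measure-Object gives us the average, maximum and minimum, but on PowerShell 3.0 standard deviation isn’t built in, so that takes a couple of extra lines:

```powershell
#Summarise the TotalSeconds of each TimeSpan collected in $times
$stats = $times | Measure-Object -Property TotalSeconds -Average -Maximum -Minimum
$stats

#Standard deviation by hand: square root of the mean squared deviation from the average
$sumSquares = ($times | ForEach-Object { [math]::Pow($_.TotalSeconds - $stats.Average, 2) } | Measure-Object -Sum).Sum
[math]::Sqrt($sumSquares / $stats.Count)
```

This assumes $times is the array of TimeSpan objects built in the timing script above; the same pattern works for the 6GB runs later on.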

So after running each procedure 3 times to get a consistent average what did we find?

From the T-SQL runs I got an average of 48.18s, with a longest of 55.59s, shortest of 45.94s and a standard deviation of 1.42s (Full result data here)

And from the PowerShell runs I got an average of 58.56s, with a longest of 72.91s, shortest of 56.48s and a standard deviation of 2.1s (Full result data here)

So PowerShell appears slower here. It’s certainly more variable: a greater spread between max and min, and a larger standard deviation, means the times are generally more spread out, not just at the extremes.

Not looking great at the moment. So let’s try doubling the size of the database and see if the differences stay consistent. Just rerun the insert block again. This time I’m only going to run the timings once each.

Now the T-SQL runs give me an average of 111.8s, max of 118.59s, minimum of 107.87s and a standard deviation of 2.16s (Full result data here)

And PowerShell gives me an average of 110.48s, max of 121.1s, minimum of 107s and a standard deviation of 2.78s (Full result data here)

So now it shows that they’re pretty much even, which suggests the PowerShell overhead is the fixed cost of creating the SMO objects and submitting the command to the server, rather than something that grows with the size of the backup.

Hopefully that’s put some minds at rest about any potential performance issues with using these PowerShell backup and restore methods rather than the more conventional SQL Server methods.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

T-SQL Backups time results

starttime endtime Time difference (ms)
24/09/2013 15:24:23 24/09/2013 15:25:19 55593
24/09/2013 15:25:19 24/09/2013 15:26:07 47823
24/09/2013 15:26:07 24/09/2013 15:26:54 47363
24/09/2013 15:26:54 24/09/2013 15:27:42 48213
24/09/2013 15:27:42 24/09/2013 15:28:30 47576
24/09/2013 15:28:30 24/09/2013 15:29:17 46646
24/09/2013 15:29:17 24/09/2013 15:30:03 46326
24/09/2013 15:30:03 24/09/2013 15:30:51 48476
24/09/2013 15:30:51 24/09/2013 15:31:39 47656
24/09/2013 15:31:39 24/09/2013 15:32:26 46546
24/09/2013 15:32:26 24/09/2013 15:33:13 47756
24/09/2013 15:33:13 24/09/2013 15:34:01 47716
24/09/2013 15:34:01 24/09/2013 15:34:47 45940
24/09/2013 15:34:47 24/09/2013 15:35:35 47993
24/09/2013 15:35:35 24/09/2013 15:36:24 49250
24/09/2013 15:36:24 24/09/2013 15:37:14 50020
24/09/2013 15:37:14 24/09/2013 15:38:03 48333
24/09/2013 15:38:03 24/09/2013 15:38:51 48286
24/09/2013 15:38:51 24/09/2013 15:39:39 48086
24/09/2013 16:32:19 24/09/2013 16:33:06 47543
24/09/2013 16:33:06 24/09/2013 16:33:54 48000
24/09/2013 16:33:54 24/09/2013 16:34:41 47250
24/09/2013 16:34:41 24/09/2013 16:35:30 48350
24/09/2013 16:35:30 24/09/2013 16:36:17 47543
24/09/2013 16:36:17 24/09/2013 16:37:05 47426
24/09/2013 16:37:05 24/09/2013 16:37:52 47296
24/09/2013 16:37:52 24/09/2013 16:38:40 48030
24/09/2013 16:38:40 24/09/2013 16:39:29 49056
24/09/2013 16:39:29 24/09/2013 16:40:15 46010
24/09/2013 16:40:15 24/09/2013 16:41:02 46876
24/09/2013 16:41:02 24/09/2013 16:41:49 47360
24/09/2013 16:41:49 24/09/2013 16:42:37 47286
24/09/2013 16:42:37 24/09/2013 16:43:26 49096
24/09/2013 16:43:26 24/09/2013 16:44:13 47810
24/09/2013 16:44:13 24/09/2013 16:45:01 47183
24/09/2013 16:45:01 24/09/2013 16:45:48 47423
24/09/2013 16:45:48 24/09/2013 16:46:36 47953
24/09/2013 16:46:36 24/09/2013 16:47:24 48056
25/09/2013 07:32:54 25/09/2013 07:33:42 47663
25/09/2013 07:33:42 25/09/2013 07:34:30 47906
25/09/2013 07:34:30 25/09/2013 07:35:19 49300
25/09/2013 07:35:19 25/09/2013 07:36:08 48623
25/09/2013 07:36:08 25/09/2013 07:36:55 47323
25/09/2013 07:36:55 25/09/2013 07:37:44 49240
25/09/2013 07:37:44 25/09/2013 07:38:34 49756
25/09/2013 07:38:34 25/09/2013 07:39:23 48940
25/09/2013 07:39:23 25/09/2013 07:40:12 48746
25/09/2013 07:40:12 25/09/2013 07:40:59 47420
25/09/2013 07:40:59 25/09/2013 07:41:47 48020
25/09/2013 07:41:47 25/09/2013 07:42:35 47773
25/09/2013 07:42:35 25/09/2013 07:43:25 50163
25/09/2013 07:43:25 25/09/2013 07:44:16 51130
25/09/2013 07:44:16 25/09/2013 07:45:05 48266
25/09/2013 07:45:05 25/09/2013 07:45:53 48846
25/09/2013 07:45:53 25/09/2013 07:46:42 49130
25/09/2013 07:46:42 25/09/2013 07:47:32 49333
25/09/2013 07:47:32 25/09/2013 07:48:19 47490

PowerShell Backup Time results

Time difference (s)
57.30
56.48
59.92
57.85
57.72
57.99
58.43
57.15
57.75
57.57
58.08
57.57
57.92
58.24
58.85
59.68
57.91
58.89
56.84
58.23
59.27
58.78
58.26
58.18
57.62
59.20
58.96
57.80
57.94
57.56
58.80
57.45
56.66
59.02
59.24
58.72
57.96
58.11
72.91
59.89
58.91
58.84
58.52
59.15
59.22
58.59
57.33
58.93
58.23
58.58
58.37
58.10
60.17
58.09
58.15
56.82
59.31

T-SQL Results, 6GB database

StartTime EndTime Difference(ms)
2013-09-25 09:42:18.563 2013-09-25 09:44:17.153 118590
2013-09-25 09:44:17.160 2013-09-25 09:46:08.480 111320
2013-09-25 09:46:08.480 2013-09-25 09:47:58.923 110443
2013-09-25 09:47:58.923 2013-09-25 09:49:46.793 107870
2013-09-25 09:49:46.793 2013-09-25 09:51:38.323 111530
2013-09-25 09:51:38.323 2013-09-25 09:53:28.560 110236
2013-09-25 09:53:28.560 2013-09-25 09:55:21.393 112833
2013-09-25 09:55:21.393 2013-09-25 09:57:11.690 110296
2013-09-25 09:57:11.690 2013-09-25 09:59:03.420 111730
2013-09-25 09:59:03.420 2013-09-25 10:00:56.600 113180
2013-09-25 10:00:56.600 2013-09-25 10:02:48.073 111473
2013-09-25 10:02:48.073 2013-09-25 10:04:38.913 110840
2013-09-25 10:04:38.913 2013-09-25 10:06:30.870 111956
2013-09-25 10:06:30.870 2013-09-25 10:08:23.490 112620
2013-09-25 10:08:23.490 2013-09-25 10:10:16.580 113090
2013-09-25 10:10:16.580 2013-09-25 10:12:08.380 111800
2013-09-25 10:12:08.380 2013-09-25 10:14:02.007 113626
2013-09-25 10:14:02.007 2013-09-25 10:15:51.490 109483
2013-09-25 10:15:51.490 2013-09-25 10:17:42.697 111206

PowerShell results 6GB database

Time (s)
121.08
111.15
110.59
110.47
109.39
109.23
110.79
109.87
110.14
109.96
110.59
107.47
109.18
110.67
109.81
110.96
110.46
110.38
106.99
