Tag Archives: powershell

Bulk uploading CSVs to Azure SQL Database with dbatools

Like most people we’re busy moving ourselves over to Azure, and like a lot of people (even though they won’t admit it) we’ve got years of data stashed away in CSV files. Go on, own up: there’s years’ worth of department membership stashed away in an HR CSV folder somewhere in your organisation 😉

To get some of this data usable for reporting we’re importing it into Azure SQL Database so people can start working their way through it, and we can fix up errors before we push it through into Azure Data Lake for mining. Being a fan of dbatools it was my first port of call for automating something like this.

Just to make life interesting, I want to add a time of creation field to the data to make tracking trends easier. As this information doesn’t actually exist in the CSV columns, I’m going to use LastWriteTime as a proxy for the creation time.

$Files = Get-ChildItem \\server\HR\HandSTraining\Archive -Filter *.Csv
$SqlCredential = Get-Credential

ForEach ($File in $Files | Where-Object {$_.Length -gt 0}) {
    $InputObject = ConvertFrom-Csv -InputObject (Get-Content $File.fullname -raw) -Header UserName, StatusName
    $InputObject | Add-Member -MemberType NoteProperty -Value $File.LastWriteTime -Name DateAdded
    $DataTable = $InputObject | Out-DbaDataTable
    Write-DbaDataTable -SqlInstance superduper.database.windows.net -Database PreventPBI -Table Training -InputObject $DataTable -Schema dbo -SqlCredential $SqlCredential -RegularUser
    Remove-Variable InputObject
}

Working our way through that, we have:

$Files = Get-ChildItem \\server\HR\HandSTraining\Archive -Filter *.Csv
$SqlCredential = Get-Credential

Set up the basics we’re going to need throughout. Grab all the CSV files from our network share. I prefer grabbing credentials with Get-Credential, but if you’d prefer to embed them in the script you can use something like the sketch below instead.
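The username and password here are made up, and bear in mind this approach leaves the password in plain text inside your script:

# Sketch: building a credential in-script rather than prompting (hypothetical values)
$Password = ConvertTo-SecureString 'SuperSecretPassword!' -AsPlainText -Force
$SqlCredential = New-Object System.Management.Automation.PSCredential ('AzureSqlUser', $Password)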

We then ForEach through all the files, having filtered out the empty ones:

    $InputObject = ConvertFrom-Csv -InputObject (Get-Content $File.fullname -raw) -Header UserName, StatusName
    $InputObject | Add-Member -MemberType NoteProperty -Value $File.LastWriteTime -Name DateAdded

Load the file contents into an object with ConvertFrom-Csv. These CSV files don’t contain a header row, so I use the -Header parameter to force them in. This also helps with Write-DbaDataTable, as I can ensure that the object property names match the SQL column names for the upload.

Then we add a new property to our InputObject. Doing it this way we add it to every ‘row’ in the object at once. If you want to add multiple new properties, just repeat this for each one, as sketched below.
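A quick illustrative sketch (the extra property names here are made up), showing that adding more than one property just means more Add-Member calls:

$InputObject | Add-Member -MemberType NoteProperty -Value $File.Name -Name SourceFile
$InputObject | Add-Member -MemberType NoteProperty -Value (Get-Date) -Name LoadedAt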

    $DataTable = $InputObject | Out-DbaDataTable
    Write-DbaDataTable -SqlInstance superduper.database.windows.net -Database PreventPBI -Table Training -InputObject $DataTable -Schema dbo -SqlCredential $SqlCredential -RegularUser

Convert our InputObject into a datatable, which is the format Write-DbaDataTable needs for input.

And then the command that does the loading, Write-DbaDataTable. There are only two things here that you have to do differently for loading to an Azure SQL Database as opposed to a normal SQL Server instance. For Azure SQL Databases you have to use a SQL Credential, as the underlying DLLs don’t work (yet) with the various Integrated Authentication options. And you need to use the RegularUser switch. Normally dbatools will assume you have sysadmin rights on your SQL Server instance, as they are needed for many of its tasks. In an Azure SQL Database you can’t have those rights as they don’t exist, so without RegularUser you’ll get a nice error message. Just something to look out for; I’ve tripped myself up in the past when repointing load scripts.

Then we drop InputObject and go round the loop again until we’re finished.

Easy and very quick, and now I can just point PowerBI at it and let the users and analysts work out what they want to do with it.


Complex SQL Server restore scenarios with the dbatools Restore-DbaDatabase pipeline

No matter how hard the dbatools team try, there’s always someone who wants to do things we’d never thought of. This is one of the great things about getting feedback direct from a great community. Unfortunately a lot of these ideas are either too niche to implement, or would need a lot of complex code for a single use case.

As part of the Restore-DbaDatabase stack rewrite, I wanted to make it easier for users to get their hands dirty within the restore stack. Not necessarily by diving into the core code and the world of GitHub pull requests, but by manipulating the data flowing through the pipeline using standard PowerShell techniques, all the while letting our code do the heavy lifting.

So, below the fold we’ll be looking at some examples of how you can start going to town with your restores.


Debugging the new dbatools Restore-DbaDatabase pipeline

A new version of the dbatools Restore-DbaDatabase command was released into the wild this week. One of the main aims of this release was to make it easier to debug failures in the restore process, and to drag information out of the pipeline easily (and anonymously) so we can increase our Pestering of the module with unit and integration tests.

So I’d like to share some of the features I’ve put in so you can take part.

The biggest change is that Restore-DbaDatabase is now a wrapper around 5 public functions. The 5 functions are:

  • Get-DbaBackupInformation
  • Select-DbaBackupInformation
  • Format-DbaBackupInformation
  • Test-DbaBackupInformation
  • Invoke-DbaAdvancedRestore

These can be used individually for advanced restore scenarios; I’ll go through some examples in a later post.

For now it’s enough to know that Restore-DbaDatabase is a wrapper around this pipeline:

Get-DbaBackupInformation | Select-DbaBackupInformation | Format-DbaBackupInformation | Test-DbaBackupInformation | Invoke-DbaAdvancedRestore

and its other job is passing parameters into these sub-functions as needed.

With previous versions of Restore-DbaDatabase you were restricted to throwing data into one end and seeing what came out of the other, with some insight provided by Verbose messages. Now things can be stepped through, and data extracted as needed, in a format that plugs straight into our testing functions.

Get-DbaBackupInformation

This is the function that gets all of the information about backup files. It scans the given paths, and uses Read-DbaBackupHeader to extract the information from them. This is stored in a dbatools BackupHistory object (this is the same as the output from Get-DbaBackupHistory, so we are standardising on a format for Backup information to be passed between functions).

So this would be a good place to check that you’ve gotten the files you think you should have, and is also the first place we’d be looking if you had a report of a break in the LSN chain.

To get the output from the pipeline at this point we use the GetBackupInformation parameter:

Restore-DbaDatabase -GetBackupInformation gbi

This will create a globally scoped variable $gbi containing the output from Get-DbaBackupInformation. Note that when passing the name to Restore-DbaDatabase you do not need to specify the $.

If you want to stop execution at this point, then use the -StopAfterGetBackupInformation switch. This will stop Restore-DbaDatabase from going any further.
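Put together, a run that only gathers the backup information might look something like this (a sketch only; [Usual Parameters] stands in for whatever you’d normally pass to Restore-DbaDatabase):

Restore-DbaDatabase [Usual Parameters] -GetBackupInformation gbi -StopAfterGetBackupInformation
$gbi | Format-Table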

This is also a good way of saving time on future runs, as the BackupHistory object can be passed straight in, saving the overhead of reading all the file headers again:

$gbi | Restore-DbaDatabase [Usual Parameters] -TrustDbBackupHistory

Select-DbaBackupInformation

Here we filter down the output from Get-DbaBackupInformation to restore to the point in time requested, or the latest point we can. This means we find:
– the last full backup before the point in time
– the latest differential between that full backup and the point in time
– and then all the transaction log backups needed to get us to the requested time
This is done for every database found in the BackupHistory object.

Here is where we’d begin looking for issues if you had a ‘good’ LSN chain from Get-DbaBackupInformation and then it broke.

To get this data you use the SelectBackupInformation parameter, passing in the name of the variable you want to store the data in (without the $ as per GetBackupInformation above)

There is also a corresponding StopAfterSelectBackupInformation switch to halt processing at this point. Processing stops at the first StopAfter* switch reached in the pipeline, so specifying multiple StopAfter* switches has no additional effect.

Format-DbaBackupInformation

This function performs the transforms on the BackupHistory object per the parameters pushed in. This includes renaming databases, and file moves and renames. For everything we touch we add an extra property prefixed with Original to the BackupHistory object. For example, the original name of the database will be in OriginalDatabase, and the target name will be in Database.

So this is a good spot to test why transforms aren’t working as expected.

To get data out at this pipeline stage use the FormatBackupInformation parameter with a variable name. And as normal it has an accompanying StopAfterFormatBackupInformation switch to halt things there.

Test-DbaBackupInformation

Before passing the BackupHistory object off to be restored we do some checks to make sure everything is OK. The following checks are made:

  • LSN chain complete
  • Does a destination file already exist? If it’s owned by a different database then fail
  • Does a destination file already exist? If it’s owned by the database being restored, is WithReplace specified
  • Can SQL Server see and write to all the destination folders
  • Can SQL Server create any destination folders missing
  • Can SQL Server see all the backup files

If a database passes all these checks then its backup history is marked as restorable by the IsVerified property being set to $True.

To get the data stream out at this point use the TestBackupInformation parameter.
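As a rough sketch pulling the stages together, you could capture every stage in one run (again, [Usual Parameters] is just a placeholder for your normal restore parameters, and IsVerified is the property mentioned above):

Restore-DbaDatabase [Usual Parameters] -GetBackupInformation gbi -SelectBackupInformation sbi -FormatBackupInformation fbi -TestBackupInformation tbi
# For example, check which backup sets failed verification:
$tbi | Where-Object {$_.IsVerified -ne $True}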

General Errors with restores

Once we’re past these stages, then our error reporting is at the mercy of the SMO Restore class. This doesn’t always provide an obvious cause straight away. Usually the main error can be found with:

$error[1] | Select-Object *

We like to think we capture most restore failure scenarios nicely, but if you find something we don’t then please let us know, either on Slack or by raising a GitHub issue.

As usual, the dbatools terminating error will be in $error[0].

Providing the information for testing or debugging

If you’re running in to problems then the dbatools team may ask you to provide the output from one of these stages so we can debug it, or incorporate the information into our tests.

Of course you won’t want to share confidential information with us, so we would recommend anonymising your data. My normal way of doing this is to use these 2 stubbing functions:
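As a very rough sketch of the idea (this is not the real functions, just an illustration of deterministic stubbing), each sensitive value is swapped for a numbered token, and the mapping is reused so the same input always produces the same output:

# Illustrative only - not the real anonymising functions
function Hide-SensitiveValue {
    param($Value, [hashtable]$Map)
    if ($null -eq $Value) { return $Value }
    if (-not $Map.ContainsKey($Value)) {
        $Map[$Value] = "Anon" + ($Map.Count + 1)
    }
    return $Map[$Value]
}

$map = @{}
foreach ($item in $sbi) {
    # Only a subset of the sensitive properties is shown here
    foreach ($prop in 'ComputerName','InstanceName','SqlInstance','Database','UserName') {
        $item.$prop = Hide-SensitiveValue -Value $item.$prop -Map $map
    }
}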

So if we’ve asked for the Select-DbaBackupInformation output, the process would be:

Restore-DbaDatabase [Normal parameters] -SelectBackupInformation sbi -StopAfterSelectBackupInformation
Filter-DbaToolsHelpRequest $sbi
$sbi | Export-CliXml -Depth 4 -Path c:\some\path\file.xml  # -Depth needs a numeric value deep enough to capture the nested objects

And then upload the resulting xml file.

This method will anonymise the values in ComputerName, InstanceName, SqlInstance, Database, UserName, Path, FullName, FileList, OriginalDatabase, OriginalFileList, OriginalFullName and ReplaceDatabaseName. But will produce the same output for the same input, so we can work with multiple database sets at once.

I hope that’s been of some help. As always if you’ve a question then drop a comment below, ping me on twitter (@napalmgram) or raise an issue with dbatools on Slack or Github


Using the SQLAutoRestores PowerShell Module

The SQLAutoRestores module as it currently stands (04/08/2016, v 0.9.0.0) is very much based on my personal usage patterns, but the plan is to make it as flexible as possible. So if you see something in this workflow that you’d like to change, or need something to cope with your SQL Server backup procedures, then please leave a comment below, or drop in a feature request at GitHub – https://github.com/Stuart-Moore/SQLAutoRestores/issues

Current practice at work is that all SQL Server databases are backed up, either to a central share, or locally and then transferred to the central share. So we end up with a backup folder structure like:

Root 
 |
 +-Server1
 |    +-DB1
 |    +-DB2
 |
 +-Server2
      +-DB1
      +-DB2

I want to randomly pick a set of backup files to restore. So the first thing I want to do is to get all the ‘Bottom folders’ of this directory tree:

$folders = Get-BottomFolders \\backupserver\share

And if I’ve split the backups across multiple shares I can add more:

$folders += Get-BottomFolders \\server2\backups$\
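(If you’re wondering what Get-BottomFolders is doing, it’s essentially returning the directories that have no subdirectories of their own. A rough plain-PowerShell sketch of the same idea would be:)

# Sketch only - roughly what Get-BottomFolders gives you: folders with no child folders
Get-ChildItem \\backupserver\share -Recurse -Directory |
    Where-Object { -not (Get-ChildItem $_.FullName -Directory) } |
    Select-Object -ExpandProperty FullName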

So now I want to pick a folder at random from those. But not everyone cleans up after themselves, so there could be folders that don’t contain anything we’re interested in. There’s a Test-DBBackupsExist function that makes sure we have at least one backup file, so we loop until we get something useful:

$RestoreFolder = Get-RandomElement $folders
while (!(Test-DBBackupsExist $RestoreFolder)){
    $RestoreFolder = Get-RandomElement $folders
}

Great, we’ve got a folder with some SQL Server backups in it. Now we need to see what’s in the backup files. So we do a scan of the file headers, which needs a SQL Server, so we build a SQL Server connection as well:

$SQLconnection = New-SQLConnection 'server1\instance2'
$BackupObjects = Get-DBBackupObject -InputPath $RestoreFolder -ServerInstance $SQLconnection

This returns a simple PowerShell object containing the header highlights from each file in the folder.

Note: at this point we’ve not checked that we’ve got a complete restorable set of files. For all we know, we’ve got 30 transaction log files and no full backup to start from!

I prefer to restore databases to random points in time rather than just the latest available. This gives a wider range of options to compare, and might just mean that you’ll discover that your SAN is corrupting the 22:15 t-log backup.

The next function checks we’ve got at least one ‘anchoring’ full backup, picks the earliest point in time that backup covers, then gets the latest point in time covered by the backup files, and returns a random point between those two extremes. This will be our Recovery Point Objective:

$TimeToRestore = Get-PointInTime -BackupsObject $BackupObjects
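(Under the covers, picking a random point between two datetimes is simple enough; as a sketch, with made-up times:)

# Illustrative only - a random point between the earliest and latest restorable times
$EarliestTime = Get-Date '2013-09-01 00:00'
$LatestTime   = Get-Date '2013-09-02 18:00'
$TimeToRestore = [datetime](Get-Random -Minimum $EarliestTime.Ticks -Maximum $LatestTime.Ticks)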

We then filter our backup files to just those needed to hit this point in time:

$Objective = Get-RestoreSet -BackupsObject $BackupObjects -TargetTime $TimeToRestore

Or if you did just want the latest point then you can:

$Objective = Get-RestoreSet -BackupsObject $BackupObjects -Latest

Now we deal with moving the restored database files to a different location:

$Objective = Get-FileRestoreMove -BackupsObject $Objective -DestinationPath e:\some\path

And now we run some tests before the ‘expensive’, time-consuming restore itself. First off, we’ll check we’re not about to clobber another database:

Test-DatabaseExists -RestoreSQLServer $SQLconnection -DatabaseName $Objective[0].DatabaseName

Then we check we have enough space to restore. This includes checking for any file growth during the restore (if your transaction log grows to a stupid size during the day, then it’ll be grown to that size during the restore and sized down later on, so you need to accommodate the largest amount of space your database occupies, not just its final size):

Test-RestoreSpace -BackupsObject $Objective -RestoreSQLServer $SQLconnection -RestorePath e:\some\Path

And then we test the difference between the SQL Server version of the instance that did the backup and the SQL Server instance we’re asking to perform the restore. Microsoft state that restoring across more than 2 major versions isn’t allowed, so we fail it in this case (non-SQL Server backups aren’t supported (yet!)):

Test-DBRestoreVersion -BackupsObject $Objective -RestoreSQLServer $SQLconnection

And finally we restore the database:

Restore-Database -BackupsObject $Objective -RestoreSQLServer $SQLconnection -RestoreTime $TimeToRestore

Now, we want to check the restore is fine. It is possible to restore a corrupt database with no errors! (Demo and example here) :

Test-Database -DatabaseName $Objective.Databasename -RestoreSQLServer $SQLconnection

And then clean up after ourselves:

Remove-Database -DatabaseName $Objective.Databasename -RestoreSQLServer $SQLconnection

Rinse, repeat ad infinitum. I have this process running 24×7 on a dedicated restore instance. On average I restore 80 databases a day and cover every production database in a 2 week window (it’s random, so not guaranteed, but I have a priority list that skews it!)

Currently I collect my statistics with some simple Send-MailMessage usage, but I want something more robust in this module, so that’s on the list of things to get fixed before we go to 1.0.0.0 properly.

Hopefully that’s given some ideas on how to use the module. I’d love to hear any ideas on improvements or how you’d want to use it in your environment. Comment here, drop me an email, or ping me on twitter (accounts all linked top right).


Doing something a bit different at SQL Saturday Exeter 2014: PowerShell for the curious SQL Server DBA

Having had such a great time at last year’s SQL Saturday in Exeter, I was very happy to see the SQL Southwest team announce another one for 2014 on Saturday 22nd March (Register here).

So, I’ve been offered the chance to present 2 consecutive sessions covering 1 topic, 100 minutes to fill with something new. I’ve decided that based on feedback and questions from my “Using PowerShell for Automating Backups and Restores with PowerShell” presentation I’ll be filling the 100 minutes with:

PowerShell for the curious SQL Server DBA



Day 31 of 31 Days of SQL Server Backup and Restore using PowerShell: Rounding Up

So here we are, post 34 in my 31 post series about using PowerShell to perform SQL Server backups and restores. We’ve taken a fairly whistle-stop tour of the main points. There have been a couple of omissions, but hopefully I’ve provided you with plenty of examples you can use in your own situations, or at least some ideas about how you can approach automating your restores. I also hope I’ve shown you that automating those restores needn’t be the nightmare you might have been led to believe from reading through some of the dynamic T-SQL out there.

Much of the more advanced use of these techniques comes down to using deeper features of PowerShell rather than the SMO features. If you want to learn more about PowerShell then I recommend the following books:

And the following blogs:

I have also been presenting on this topic at UK SQL Server User Groups this year. A set of slides, scripts and demos from one at Cardiff SQL User Group from October 2013 are available for download (cardiff-usergroup-powershell-backups-and-restores-01012013).

I’m also consolidating some of my production functions and routines into a PSBackup module, which I’m hoping to put up on GitHub so they can be worked on as an open project by anyone who wants to contribute. When it’s up I’ll post an announcement.

I hope you’ve enjoyed the posts, and please feel free to leave a comment, drop me a mail or ping me on twitter if you have a questions.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.


Day 30 of 31 Days of SQL Server Backup and Restore using PowerShell: Recording for the Auditors

One of the reasons I became so keen on automating restores was that I’d regularly get requests from various auditors asking for examples of valid restores carried out in the last 3 months, or wanting me to justify my Recovery Time Objectives, or needing to ensure we had adequate DR testing. And there’s always a manager wanting reassurance that we could meet our SLA commitments.

By automating restores of our major production systems and recording the results, whenever a request came in I could quickly dump the information into Excel for them (remember, it’s not Real Information™ unless it’s in Excel).

So what sort of information should be audited about restores? I find the following are a minimum that cover most requests, though be sure to check for any industry/business specifics that may apply to your own case. A rough sketch of logging these is included after the list.

  • Time of restore
    • Restores should ideally be attempted at different times throughout the day. This will highlight any potential slowdowns due to other activity on hardware or network
  • What database was restored
  • Which server was the restore performed on
    • If you have multiple restore/DR servers, it’s important you have a log of testing a restore on all of them, so you don’t find out at a critical point that one of the set doesn’t work
  • How long it took
  • How much data was written out
    • This could be the amount of data on disk at the end of the backup, or you could calculate the total throughput of all backup files restored, or both
  • To what size did the database files grow during the restore
    • This may not be the same as the previous metric. This value will also include the empty space within data files, and accommodate any ‘shrinks’ that happened during the period being restored
  • User running the restore
    • Just so you can recreate any permissions issues
  • Original server of backup
  • Location of all backup files restored
  • Output (or lack of) from DBCC
    • If you’re using NO_INFOMSGS you may still want to log the fact that you had no reported errors, just to record that it had been run
  • Output from in house check scripts
  • Log of any notifications sent to DBAs for further investigations
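
A minimal sketch of recording one of these test restores (the property names, values and file path are just examples, not part of any particular module) might look like:

# Sketch only: log each automated test restore so the history can be dumped out for auditors later
$RestoreRecord = [PSCustomObject]@{
    RestoreTime     = Get-Date
    DatabaseName    = 'ProdDB1'
    RestoreServer   = 'RestoreServer1'
    OriginalServer  = 'Server1'
    DurationSeconds = 754
    BackupFiles     = '\\backupserver\share\Server1\ProdDB1'
    DbccErrors      = 0
    RestoredBy      = $env:USERNAME
}
$RestoreRecord | Export-Csv -Path C:\RestoreAudit\RestoreLog.csv -Append -NoTypeInformation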

Once you have this information you can start to mine it for your own use as well. You can make sure that all your ‘matched’ hardware is actually working at the same speed, check that restoring whilst the network is under normal business load won’t add an extra hour to your RTO.

You can also start looking for trends: are your restores taking a lot longer since the new SAN was put in? Or is Server A producing a lot more alerts on checking, so perhaps there’s an underlying hardware problem to be investigated there?

A side bonus of this is also that your recovery hardware is being used. Rather than just sitting there waiting for a disaster, you’re actually reading and writing data from the drives. So now at 3am during a panic restore you can also be confident that you don’t have a dead spindle or a flaky drive controller in your server.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.


Day 29 of 31 Days of SQL Server Backup and Restore using PowerShell: Why I don’t use the CheckTables SMO method

2 different occasions at SQL Saturday Cambridge 2013 made me realise that I needed to ‘justify’ my use of Invoke-SQLCmd to run DBCC back on Day 21 of 31 Days of SQL Server Backup and Restore using PowerShell: Verifying a restored database. One was during my session (slides and scripts here), and the other was during Seb Matthews’ session, where he said to view every use of Invoke-SQLCmd as a failure.

The first thing I don’t like about this particular method is what happens when you check a ‘good’ database:

Import-Module sqlps -DisableNameChecking

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("Server1")

$db = $sqlsvr.Databases.item("GoodDB")

$db.CheckTables("None")

If you run this, you’ll notice you get nothing back, not a thing. This is because CheckTables runs with NO_INFOMSGS as default, and there’s no way to override it. This might work for some applications, but in some environments you would want the informational messages returned and recorded, as proof that all checks were run and passed.

That’s strike 1 for me.

If you’re wondering what the parameter to CheckTables is, it’s the repair level. Accepted values are:

  • None
  • Fast
  • Rebuild
  • AllowDataLoss

Which operate in the same way as they do under T-SQL

If you’re lucky enough not to have a corrupt database to play with, then Paul Randal has some you can download (Paul Randal, SQLSkills.com, Corrupt SQL Server Databases). Assuming you’ve restored the db as chkDBCC, you can check it like so:

Import-Module sqlps -DisableNameChecking

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("Server1")

$db = $sqlsvr.Databases.item("chkDBCC")

$db.CheckTables("None")

This time we will get some output, though unfortunately it’s a not hugely useful generic error message:

Exception calling "CheckTables" with "1" argument(s): "Check tables failed for Database 'chkdbcc'. "
At line:5 char:1
+ $db.CheckTables("None")
+ ~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : Fai

Strike 2

We can improve on this if we actually examine the error in a bit more detail by putting it in a try/catch block:

Import-Module sqlps -DisableNameChecking

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("Server1")
$db = $sqlsvr.Databases.item("chkdbcc")

try{
    $db.CheckTables("None")
}
catch [System.Exception]{
    #Walk down the InnerException chain to get to the underlying DBCC messages
    $err = $_.Exception
    while ($err.InnerException)
    {
        $err = $err.InnerException
        Write-Output $err.Message
    }
}

And now we get a more informative response:

Check tables failed for Database 'chkdbcc'.
An exception occurred while executing a Transact-SQL statement or batch.
Check Catalog Msg 3853, State 1: Attribute (object_id=1977058079) of row (object_id=1977058079,column_id=1) in
 sys.columns does not have a matching row (object_id=1977058079) in sys.objects.
Check Catalog Msg 3853, State 1: Attribute (object_id=1977058079) of row (object_id=1977058079,column_id=2) in
 sys.columns does not have a matching row (object_id=1977058079) in sys.objects.
CHECKDB found 0 allocation errors and 2 consistency errors not associated with any single object.
CHECKDB found 0 allocation errors and 2 consistency errors in database 'chkDBCC'.

To get the full information returned in our previous examples:

Error       : 8992
Level       : 16
State       : 1
MessageText : Check Catalog Msg 3853, State 1: Attribute (object_id=1977058079) of row
              (object_id=1977058079,column_id=2) in sys.columns does not have a matching row
              (object_id=1977058079) in sys.objects.
RepairLevel :
Status      : 16
DbId        : 23
DbFragId    : 1
ObjectId    : 0
IndexId     : -1
PartitionId : 0
AllocUnitId : 0
RidDbId     : 23
RidPruId    : 0
File        : 0
Page        : 0
Slot        : 0
RefDbId     : 23
RefPruId    : 0
RefFile     : 0
RefPage     : 0
RefSlot     : 0
Allocation  : 1

Well, we can’t. As well as enforcing the NO_INFOMSGS clause, CheckTables doesn’t allow the use of TABLERESULTS.

Which is strike 3, and it’s out of here!

This is all personal opinion, but this is one of the very few DBA tasks where I will always opt to use Invoke-SQLCmd over an SMO object.
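For reference, a sketch of the Invoke-SQLCmd route (not necessarily the exact code from Day 21; the instance and database names are just examples), which hands the DBCC output back as rows you can log:

Import-Module sqlps -DisableNameChecking

# DBCC via Invoke-Sqlcmd, so the TABLERESULTS output comes back as objects
$Results = Invoke-Sqlcmd -ServerInstance "Server1" -Database "chkDBCC" -Query "DBCC CHECKDB WITH TABLERESULTS, ALL_ERRORMSGS"
$Results | Format-Table Error, Level, MessageText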

Hopefully we might see a better SMO option with the release of SQL Server 2014…..

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.


Day 28 of 31 Days of SQL Server Backup and Restore using PowerShell: Setting up Database Mirroring

A very practical use for automated SQL Server backups and restores with PowerShell is setting up High Availability solutions. Usually these will involve bringing 2 or more SQL Server instances into sync with each other, the usual solution being backups and restores. SQL Server Mirroring may have been deprecated in SQL Server 2012 in favour of AlwaysOn Availability Groups, but I still see plenty of installs that haven’t migrated yet, so it’s still heavily in use. To set up SQL Server mirroring successfully there are a number of conditions that must be fulfilled.

  • All SQL Server instances should be of the same version. While you can set up mirroring from a lower version to a higher version (i.e. 2008 -> 2008 R2), if you fail over you get a free database upgrade thrown in and can never mirror back.
  • All SQL Server instances should be of the same edition (Enterprise, Standard, etc). You can get away with this if you are NOT running any edition-specific features, but the minute you enable an “Enterprise” feature then your database won’t start on any instance whose edition doesn’t support that feature.
  • Instances should have the same starting ‘Log Sequence Number’ in their active transaction log. This ensures that all the ‘new’ transactions transmitted in the initial sync are working from the same start point. Prior to SQL Server 2008 R2 you could just restore a current backup; now you need to restore a full backup and at least 1 transaction log.
  • There should be a valid TCP mirroring endpoint on each instance
  • There should be a valid SQL Login with permissions to access the above Endpoints

All of this we can handle within PowerShell to make our lives nice and easy. The following is going to assume that you have a shared folder which both SQL Server instances have access to. If not, then you’ll just need to use the Copy-Item cmdlet to manually copy the backup files across. We’re also going to assume that both instances are running under the same domain accounts, so there’s no need to grant permissions on the endpoints.

Import-Module SQLPS -DisableNameChecking

#Setup some basic variables
$ServerPrimaryName = "Server1"
$ServerSecondaryName = "Server2"

#Let's do multiple databases
$Databases = ("db1","ProdDB2", "MirrorDB1")

#Both SQL Server instances should have read/write to here
$FileShare = "\\FileServer\DBShare\"

#Get all dbs onto the same log:
foreach($db in $Databases){
    Backup-SQLDatabase -Database $db -ServerInstance $ServerPrimaryName -BackupFile $FileShare+$db+"_full.bak"
    Backup-SQLDatabase -Database $db -ServerInstance $ServerPrimaryName -BackupFile $FileShare+$db+"_log.trn"
    Restore-SQLDatabase -Database $db -ServerInstance $ServerPrimaryName -BackupFile $FileShare+$db+"_full.bak" -NoRecovery
    Restore-SQLDatabase -Database $db -ServerInstance $ServerPrimaryName -BackupFile $FileShare+$db+"_log.trn" -NoRecovery
}

#Now we need to create a TCP Mirroring EndPoint on each Server
$ServerPrimary = New-Object -TypeName Microsoft.SQLServer.Management.Smo.Server($ServerPrimaryName)
$ServerSecondary = New-Object -TypeName Microsoft.SQLServer.Management.Smo.Server($ServerSecondaryName)
$EPName = "DBMirror-PS"
$EPPort = 7022

        $PrimaryEP = New-Object -TypeName Microsoft.SqlServer.Management.Smo.EndPoint($ServerPrimary, $EPName)
        $PrimaryEP.ProtocolType = [Microsoft.SqlServer.Management.Smo.ProtocolType]::Tcp
        $PrimaryEP.EndpointType = [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring
        $PrimaryEP.Protocol.Tcp.ListenerPort = $EPPort
        $PrimaryEP.Payload.DatabaseMirroring.ServerMirroringRole = [Microsoft.SqlServer.Management.Smo.ServerMirroringRole]::Partner
        $PrimaryEP.Create()
        $PrimaryEP.Start()

        $SecondaryEP  = New-Object -TypeName Microsoft.SqlServer.Management.Smo.EndPoint($ServerSecondary, $EPName)
        $SecondaryEP.ProtocolType = [Microsoft.SqlServer.Management.Smo.ProtocolType]::Tcp
        $SecondaryEP.EndpointType = [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring
        $SecondaryEP.Protocol.Tcp.ListenerPort = $EPPort
        $SecondaryEP.Payload.DatabaseMirroring.ServerMirroringRole = [Microsoft.SqlServer.Management.Smo.ServerMirroringRole]::Partner
        $SecondaryEP.Create()
        $SecondaryEP.Start()

foreach ($db in $Databases){
    #Point each copy of the database at its partner: the mirror at the principal, the principal at the mirror
    $ServerSecondary.Databases.Item($db).MirroringPartner = "TCP://"+$ServerPrimary.NetName+":"+$EPPort
    $ServerSecondary.Databases.Item($db).Alter()
    $ServerPrimary.Databases.Item($db).MirroringPartner = "TCP://"+$ServerSecondary.NetName+":"+$EPPort
    $ServerPrimary.Databases.Item($db).Alter()
}

As you can see this is pretty simple, and easily reusable whenever you need to set DB Mirroring up. For this example we entered the database names explicitly, but this could be easily modified to mirror every database on an instance with the following changes:

Import-Module SQLPS -DisableNameChecking

#Setup some basic variables
$ServerPrimaryName = "Server1"
$ServerSecondaryName = "Server2"
$ServerPrimary = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($ServerPrimaryName)
$ServerSecondary = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server($ServerSecondaryName)

#Load all the databases in an instance
$Databases = $ServerPrimary.Databases | Where-Object {$_.IsSystemObject -eq $False} #System databases can't be mirrored

#Both SQL Server instances should have read/write to here
$FileShare = "\\FileServer\DBShare\"

#Get all dbs onto the same log:
foreach($db in $Databases){
    Backup-SQLDatabase -Database $db.Name -ServerInstance $ServerPrimaryName -BackupFile ($FileShare+$db.Name+"_full.bak")
    Backup-SQLDatabase -Database $db.Name -ServerInstance $ServerPrimaryName -BackupFile ($FileShare+$db.Name+"_log.trn") -BackupAction Log
    Restore-SQLDatabase -Database $db.Name -ServerInstance $ServerSecondaryName -BackupFile ($FileShare+$db.Name+"_full.bak") -NoRecovery
    Restore-SQLDatabase -Database $db.Name -ServerInstance $ServerSecondaryName -BackupFile ($FileShare+$db.Name+"_log.trn") -RestoreAction Log -NoRecovery
}

#Now we need to create a TCP Mirroring EndPoint on each Server

$EPName = "DBMirror-PS"
$EPPort = 7022

        $PrimaryEP = New-Object -TypeName Microsoft.SqlServer.Management.Smo.EndPoint($ServerPrimary, $EPName)
        $PrimaryEP.ProtocolType = [Microsoft.SqlServer.Management.Smo.ProtocolType]::Tcp
        $PrimaryEP.EndpointType = [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring
        $PrimaryEP.Protocol.Tcp.ListenerPort = $EPPort
        $PrimaryEP.Payload.DatabaseMirroring.ServerMirroringRole = [Microsoft.SqlServer.Management.Smo.ServerMirroringRole]::Partner
        $PrimaryEP.Create()
        $PrimaryEP.Start()

        $SecondaryEP  = New-Object -TypeName Microsoft.SqlServer.Management.Smo.EndPoint($ServerSecondary, $EPName)
        $SecondaryEP.ProtocolType = [Microsoft.SqlServer.Management.Smo.ProtocolType]::Tcp
        $SecondaryEP.EndpointType = [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring
        $SecondaryEP.Protocol.Tcp.ListenerPort = $EPPort
        $SecondaryEP.Payload.DatabaseMirroring.ServerMirroringRole = [Microsoft.SqlServer.Management.Smo.ServerMirroringRole]::Partner
        $SecondaryEP.Create()
        $SecondaryEP.Start()

foreach ($db in $Databases){
    $ServerSecondary.Databases.Item($db.Name).MirroringPartner = "TCP://"+$ServerPrimary.NetName+":"+$EPPort
    $ServerSecondary.Databases.Item($db.Name).Alter()
    $db.MirroringPartner = "TCP://"+$ServerSecondary.NetName+":"+$EPPort
    $db.Alter()
}

And this certainly speeds things up if you need to set up a large number of mirrored databases in one fell swoop.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.


Day 27 of 31 Days of SQL Server Backup and Restore using PowerShell: Why the Long version?

You might have noticed over the last 26 days that I seem to prefer using the long form of PowerShell backups over the more concise cmdlet versions.

There’s no real reason for this, as they’re all building up the same T-SQL BACKUP statements at the end of the day. However, I find that when automating things, building up a backup or restore one option at a time like this:

$Restore.Replace=$True
$Restore.NoRecovery = $False

This gives me very easy control of what gets passed in to my backup or restore:

if ($x -eq y){
    $Restore.Replace=$True
}else{
    $Restore.Replace=$False
}

and so on for all the parameters you might want to add.

But with the newer Restore-SQLDatabase cmdlet you can’t use:

if ($x -eq y){
    $RecoveryOption = $True
}else{
    $RecoveryOption = $False
}
Restore-SQLDatabase -Database $DBName -BackupFile $backupfile -NoRecovery $RecoveryOption

-NoRecovery is a switch, not a parameter. This means that just specifying the switch turns the feature on; there is no actual value to pass through to it. This can be a bit limiting when you want complete control within a flexible set of scripts.

I find the new cmdlets are much easier when you just want to quickly back something up from the command line. Today I used these 2 lines to quickly back up all the databases on a dev/test server before we tried something destructive:

cd SQLSERVER:\SQL\DEVBOX1\DEFAULT\DATABASES
ls|%{$_|Backup-SQLDatabase}

Much simpler and quicker to type than anything else I can think of!

(Just in case you can’t read what is a great example of bad code, I’m making use of a couple of shortcuts in PowerShell:
ls is a system alias for Get-ChildItem, which coming from a Unix background I tend to use naturally. And % is an alias for ForEach-Object; using it on a collection of objects steps through each item in the collection, loading $_ on each iteration. Plus PowerShell doesn’t mind about missing spaces if things aren’t ambiguous. This type of code is great for one-offs when you just want to do something quickly, but it’s not something you ever want to leave in proper scripts or functions, and it’s not fun to discover when debugging, as it probably means shortcuts were also taken elsewhere!)
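(Written out long-hand, that one-liner is roughly:)

cd SQLSERVER:\SQL\DEVBOX1\DEFAULT\DATABASES
# The expanded version of ls|%{$_|Backup-SQLDatabase}
Get-ChildItem | ForEach-Object { $_ | Backup-SqlDatabase }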

So to answer the original question, it’s just down to a personal preference, and using the right tool at the right time.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.
