Musings of a Data professional

Stuart Moore

Month: October 2013

Learning to play with SQL Server

Ever wonder how some people always seem to know the answer in SQL Server? Or seem to come up with it with a minimum of digging around?

Well, they're not necessarily cleverer than you; it could just be that they're better at playing with SQL Server than you are. Knowing how to quickly and simply build up a test scenario means that whenever you want to try something out, or prove a hunch, you can be most of the way to proving something while others are still Googling and Binging away. Or if you've found something on the Internet, you can check it does actually do what it claims to. Because as Abraham Lincoln once said:

Blog Copy Editing, a SQL Server Community approach


I’m sure I’m not alone amongst SQL Server bloggers in starting to write a new post in a fit of great enthusiasm, but eventually getting bogged down in a mire due to dithering about whether:

  • Is the point of my post clear enough?
  • Is the post structured properly?
  • Is my writing clear and understandable?
  • Am I using too many colloquialisms?
  • Do my code examples make sense?
  • Are any attempts at humour actually funny?
  • And a whole host of other things I've probably not picked up on, because someone hasn't told me…

Having helped various people over the years proofread documents (my wife's MA dissertation in English Literature being a highlight), I realise how important it can be to have another pair of eyes look over your work.

That pair of eyes is even better when it can offer usable criticism. So when I proofread my wife's dissertation I could spot obvious spulling mistkes, but the chances of me spotting a problem in a discussion of the Ranting Poets of the early 80s were nigh on nil. And vice versa: whilst an expert on poetry and English Literature, Charlotte isn't likely to pick up on my making a mistake whilst discussing PowerShell backups.

So, assuming that there are other SQL Server bloggers out there in the same boat, I'm considering setting up a mutual help group. We'd offer to proofread and copy edit each other's future posts, offering advice, criticism (constructive!) and a potential sounding board.

If you are interested, drop a comment below. Depending on the uptake we might need to think of a clever way of managing this, but hey, we’re nothing if not inventive!

 

Speaking at Birmingham and Norwich SQL Relay 2013R2 events


I’m happy to announce that I’ll be speaking at 2 of the SQL Relay 2013R2 events happening in November.

If you've not been to a SQL Relay event before, these are FREE all-day SQL Server training events featuring some of the top SQL Server experts from around the world (and me!). This edition is visiting 10 different locations around the UK over 2 weeks, so there's bound to be one near you.

I’ll be speaking at Birmingham on 14th November and Norwich on 27th November, presenting on PowerShell for SQL Server backups and restores.

In Birmingham you'll also get Denny Cherry speaking on Hekaton, the new in-memory SQL Server objects coming in SQL Server 2014.

And in Norwich, Klaus Aschenbrenner will be presenting. While he’s not announced his topic yet, Klaus is always a great presenter so it’ll be a good ‘un.

Some of the events are filling up fast, so register quickly to make sure of a place: Register for SQL Relay

Hope to see some of you there. Please come up and say Hi if you do.

Day 31 of 31 Days of SQL Server Backup and Restore using PowerShell: Rounding Up

So here we are: post 34 in my 31 post series about using PowerShell to perform SQL Server backups and restores. We've taken a fairly whistle-stop tour of the main points. There have been a couple of omissions, but hopefully I've provided you with plenty of examples you can use in your own situations, or at least some ideas about how you can approach automating your restores. I also hope I've shown you that automating those restores needn't be the nightmare you might have been led to believe from reading through some of the dynamic T-SQL out there.

Many of the more advanced uses of these techniques come down to using deeper features of PowerShell rather than SMO features. If you want to learn more about PowerShell then I recommend the following books:

And the following blogs:

I have also been presenting on this topic at UK SQL Server User Groups this year. The slides, scripts and demos from my session at the Cardiff SQL User Group in October 2013 are available for download (cardiff-usergroup-powershell-backups-and-restores-01012013).

I'm also consolidating some of my production functions and routines into a PSBackup module, which I'm hoping to put up on GitHub so it can be worked on as an open project by anyone who wants to contribute. When it's up I'll post an announcement.

I hope you've enjoyed the posts, and please feel free to leave a comment, drop me a mail or ping me on Twitter if you have any questions.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Scripts and Demos from Cardiff SQL Server User Group presentation, 1st October 2013

All the files from my presentation on using PowerShell for SQL Server Backups and Restores at Cardiff SQL Server User Group on 1st October 2013.

cardiff-usergroup-powershell-backups-and-restores-01012013

Any problems, just drop me a line.

Day 30 of 31 Days of SQL Server Backup and Restore using PowerShell: Recording for the Auditors

One of the reasons I became so keen on automating restores was that I’d regularly get requests from various auditors asking for examples of valid restores carried out in the last 3 months, or wanting me to justify my Recovery Time Objectives, or needing to ensure we had adequate DR testing. And there’s always a manager wanting reassurance that we could meet our SLA commitments.

By automating restores of our major production systems and recording the results, whenever a request came in I could quickly dump the information into Excel for them (remember, it’s not Real Information™ unless it’s in Excel).

So what sort of information should be audited about restores? I find the following are a minimum that covers most requests, though be sure to check for any industry/business specifics that may apply in your own case.

  • Time of restore
    • Restores should ideally be attempted at different times throughout the day. This will highlight any potential slowdowns due to other activity on the hardware or network
  • What database was restored
  • Which server was the restore performed on
    • If you have multiple restore/DR servers, it's important you have a log of testing a restore on all of them, to avoid finding out at a critical point that the one you've reached for is the one that doesn't work.
  • How long it took
  • How much data was written out
    • This could be the amount of data on disk at the end of the restore, or you could calculate the total throughput of all backup files restored, or both
  • To what size did the database files grow during the restore
    • This may not be the same as the previous metric. This value will also include the empty space within data files, and accommodate any ‘shrinks’ that happened during the period being restored
  • User running the restore
    • Just so you can recreate any permissions issues
  • Original server of backup
  • Location of all backup files restored
  • Output (or lack of) from DBCC
    • If you're using NO_INFOMSGS you may still want to log the fact that you had no reported errors, just to record that it had been run
  • Output from in house check scripts
  • Log of any notifications sent to DBAs for further investigations
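As an illustrative sketch of capturing the fields above (all the property names, sample values and the log path here are my own assumptions, not a prescribed schema), each test restore can be recorded as a PowerShell object and appended to a CSV log, which drops straight into Excel:

```powershell
# Record one restore test as an object and append it to a running log.
# Field names, sample values and the log path are illustrative assumptions.
$auditRecord = [PSCustomObject]@{
    RestoreTime    = Get-Date
    Database       = "ProdDB1"
    RestoreServer  = "DRServer1"
    DurationSecs   = 342
    DataWrittenMB  = 20480
    FileGrowthMB   = 25600
    RestoredBy     = $env:USERNAME
    OriginalServer = "Server1"
    BackupFiles    = "\\FileServer\DBShare\ProdDB1_full.bak"
    DbccOutput     = "No errors reported"
}

# Export-Csv -Append (PowerShell 3.0+) keeps one ever-growing log file
$auditRecord | Export-Csv -Path "\\FileServer\DBShare\RestoreAudit.csv" -Append -NoTypeInformation
```

A later Import-Csv of the same file then gives you the raw material for any auditor's request.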

Once you have this information you can start to mine it for your own use as well. You can make sure that all your 'matched' hardware is actually working at the same speed, and check that restoring whilst the network is under normal business load won't add an extra hour to your RTO.

You can also start looking for trends. Are your restores taking a lot longer since the new SAN was put in? Is Server A producing a lot more alerts on checking, suggesting an underlying hardware problem to be investigated?

A side bonus of this is that your recovery hardware is being used. Rather than just sitting there waiting for a disaster, it's actually reading and writing data from its drives. So at 3am during a panic restore you can also be confident that you don't have a dead spindle or a flaky drive controller in your server.
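To sketch the sort of trend-mining just mentioned (the CSV path and the RestoreServer/DurationSecs column names are assumptions about a home-grown log, not a real schema), average restore times per server fall out of a short pipeline:

```powershell
# Average restore duration per DR server from a home-grown CSV audit log.
# The path and column names are illustrative assumptions.
Import-Csv -Path "\\FileServer\DBShare\RestoreAudit.csv" |
    Group-Object -Property RestoreServer |
    ForEach-Object {
        [PSCustomObject]@{
            Server          = $_.Name
            Restores        = $_.Count
            # Import-Csv returns strings, so cast before averaging
            AvgDurationSecs = ($_.Group | ForEach-Object { [double]$_.DurationSecs } |
                               Measure-Object -Average).Average
        }
    } |
    Sort-Object -Property AvgDurationSecs -Descending
```

A server whose average suddenly climbs between months is exactly the sort of thing to chase before the 3am phone call.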

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 29 of 31 Days of SQL Server Backup and Restore using PowerShell: Why I don’t use the checktables SMO method

Two different occasions at SQL Saturday Cambridge 2013 made me realise that I needed to 'justify' my use of Invoke-SQLCmd to run DBCC back on Day 21 of 31 Days of SQL Server Backup and Restore using PowerShell: Verifying a restored database. One was during my session (slides and scripts here), and the other was during Seb Matthews' session, where he said to view every use of Invoke-SQLCmd as a failure.

The first thing I don't like about this particular method is what happens when you check a 'good' database:

Import-Module sqlps -DisableNameChecking

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("Server1")

$db = $sqlsvr.Databases.item("GoodDB")

$db.CheckTables("None")

If you run this, you'll notice you get nothing back, not a thing. This is because CheckTables runs with NO_INFOMSGS by default, and there's no way to override it. This might work for some applications, but in some environments you'd want informational messages returned and recorded, as proof that all checks were run and passed.

That’s strike 1 for me.

If you’re wondering what the parameter to CheckTables is, it’s the repair level. Accepted values are:

  • None
  • Fast
  • Rebuild
  • AllowDataLoss

These operate in the same way as they do under T-SQL.

If you're lucky enough not to have a corrupt database to play with, then Paul Randal has some you can download (Paul Randal, SQLSkills.com, Corrupt SQL Server Databases). Assuming you've restored the db as chkDBCC, you can check it like so:

Import-Module sqlps -DisableNameChecking

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("Server1")

$db = $sqlsvr.Databases.item("chkDBCC")

$db.CheckTables("None")

This time we do get some output; unfortunately it's a generic and not hugely useful error message:

Exception calling "CheckTables" with "1" argument(s): "Check tables failed for Database 'chkdbcc'. "
At line:5 char:1
+ $db.CheckTables("None")
+ ~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : Fai

Strike 2

We can improve on this if we examine the error in a bit more detail by putting the call in a try/catch block:

Import-Module sqlps -DisableNameChecking

$sqlsvr = New-Object -TypeName  Microsoft.SQLServer.Management.Smo.Server("Server1")
$db = $sqlsvr.Databases.item("chkdbcc")

try{
    $db.CheckTables("None")
}
catch [System.Exception] {
    $err = $_.Exception
    while ( $err.InnerException )
    {
        $err = $err.InnerException
        Write-Output $err.Message
    }
}

And now we get a more informative response:

Check tables failed for Database 'chkdbcc'.
An exception occurred while executing a Transact-SQL statement or batch.
Check Catalog Msg 3853, State 1: Attribute (object_id=1977058079) of row (object_id=1977058079,column_id=1) in
 sys.columns does not have a matching row (object_id=1977058079) in sys.objects.
Check Catalog Msg 3853, State 1: Attribute (object_id=1977058079) of row (object_id=1977058079,column_id=2) in
 sys.columns does not have a matching row (object_id=1977058079) in sys.objects.
CHECKDB found 0 allocation errors and 2 consistency errors not associated with any single object.
CHECKDB found 0 allocation errors and 2 consistency errors in database 'chkDBCC'.

So can we get the full information returned in our previous examples, like this?

Error       : 8992
Level       : 16
State       : 1
MessageText : Check Catalog Msg 3853, State 1: Attribute (object_id=1977058079) of row
              (object_id=1977058079,column_id=2) in sys.columns does not have a matching row
              (object_id=1977058079) in sys.objects.
RepairLevel :
Status      : 16
DbId        : 23
DbFragId    : 1
ObjectId    : 0
IndexId     : -1
PartitionId : 0
AllocUnitId : 0
RidDbId     : 23
RidPruId    : 0
File        : 0
Page        : 0
Slot        : 0
RefDbId     : 23
RefPruId    : 0
RefFile     : 0
RefPage     : 0
RefSlot     : 0
Allocation  : 1

Well, we can’t. As well as enforcing the NO_INFOMSGS clause, CheckTables doesn’t allow the use of TABLERESULTS.

Which is strike 3, and it’s out of here!

This is all personal opinion, but this is one of the very few DBA tasks where I will always opt to use Invoke-SQLCmd over an SMO object.
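For completeness, here's a minimal sketch of that Invoke-SQLCmd route (the server and database names are placeholders): DBCC CHECKDB run WITH TABLERESULTS hands back rows in the rich record format shown above, informational messages included:

```powershell
# DBCC via Invoke-Sqlcmd keeps both informational messages and TABLERESULTS rows.
# "Server1" and "chkDBCC" are placeholder names.
Import-Module sqlps -DisableNameChecking

$results = Invoke-Sqlcmd -ServerInstance "Server1" `
    -Query "DBCC CHECKDB ('chkDBCC') WITH TABLERESULTS"

# Each row exposes Error, Level, MessageText, RepairLevel and friends,
# so genuine errors can be filtered out or the whole lot logged for the auditors
$results | Where-Object { $_.Level -gt 10 } |
    Select-Object -Property Error, Level, MessageText
```

That ability to capture every row, rather than swallow them, is the whole of my case for the T-SQL route here.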

Hopefully we might see a better SMO option with the release of SQL Server 2014…

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.

Day 28 of 31 Days of SQL Server Backup and Restore using PowerShell: Setting up Database Mirroring

A very practical use for automated SQL Server backups and restores with PowerShell is setting up High Availability solutions. Usually these involve bringing 2 or more SQL Server instances into sync with each other, the usual solution being backups and restores. SQL Server Mirroring may have been deprecated in SQL Server 2012 in favour of AlwaysOn Availability Groups, but I still see plenty of installs that haven't migrated yet, so it's still heavily in use. To set up SQL Server mirroring successfully there are a number of conditions that must be fulfilled:

  • All SQL Server instances should be of the same version. While you can set up mirroring from a lower version to a higher version (i.e. 2008 -> 2008R2), if you fail over you get a free database upgrade thrown in and can never mirror back.
  • All SQL Server instances should be of the same edition (Enterprise, Standard, etc). You can get away with this if you are NOT running any edition-specific features, but the minute you enable an "Enterprise" feature your database won't start on any instance whose edition doesn't support that feature
  • Instances should have the same starting 'Log Sequence Number' in their active transaction log. This ensures that all the 'new' transactions transmitted in the initial sync are working from the same start point. Prior to SQL Server 2008R2 you could just restore a current backup; now you need to restore a full backup and at least 1 transaction log
  • There should be a valid TCP mirroring endpoint on each instance
  • There should be a valid SQL Login with permissions to access the above Endpoints

All of this we can handle within PowerShell to make our lives nice and easy. The following is going to assume that you have a shared folder which both SQL Server instances have access to. If not, then you'll just need to use the Copy-Item cmdlet to manually copy the backup files across. We're also going to assume that both instances are running under the same domain account, so there's no need to grant permissions on the endpoints.

Import-Module SQLPS -DisableNameChecking

#Setup some basic variables
$ServerPrimaryName = "Server1"
$ServerSecondaryName = "Server2"

#Let's do multiple databases
$Databases = ("db1","ProdDB2", "MirrorDB1")

#Both SQL Server instances should have read/write to here
$FileShare = "\\FileServer\DBShare\"

#Get all dbs onto the same log: back up on the primary, restore on the secondary with NoRecovery
foreach($db in $Databases){
    Backup-SQLDatabase -Database $db -ServerInstance $ServerPrimaryName -BackupFile ($FileShare+$db+"_full.bak")
    Backup-SQLDatabase -Database $db -ServerInstance $ServerPrimaryName -BackupFile ($FileShare+$db+"_log.trn") -BackupAction Log
    Restore-SQLDatabase -Database $db -ServerInstance $ServerSecondaryName -BackupFile ($FileShare+$db+"_full.bak") -NoRecovery
    Restore-SQLDatabase -Database $db -ServerInstance $ServerSecondaryName -BackupFile ($FileShare+$db+"_log.trn") -RestoreAction Log -NoRecovery
}

#Now we need to create a TCP Mirroring EndPoint on each Server
$ServerPrimary = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server($ServerPrimaryName)
$ServerSecondary = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server($ServerSecondaryName)
$EPName = "DBMirror-PS"
$EPPort = 7022

$PrimaryEP = New-Object -TypeName Microsoft.SqlServer.Management.Smo.EndPoint($ServerPrimary, $EPName)
$PrimaryEP.ProtocolType = [Microsoft.SqlServer.Management.Smo.ProtocolType]::Tcp
$PrimaryEP.EndpointType = [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring
$PrimaryEP.Protocol.Tcp.ListenerPort = $EPPort
$PrimaryEP.Payload.DatabaseMirroring.ServerMirroringRole = [Microsoft.SqlServer.Management.Smo.ServerMirroringRole]::Partner
$PrimaryEP.Create()
$PrimaryEP.Start()

$SecondaryEP = New-Object -TypeName Microsoft.SqlServer.Management.Smo.EndPoint($ServerSecondary, $EPName)
$SecondaryEP.ProtocolType = [Microsoft.SqlServer.Management.Smo.ProtocolType]::Tcp
$SecondaryEP.EndpointType = [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring
$SecondaryEP.Protocol.Tcp.ListenerPort = $EPPort
$SecondaryEP.Payload.DatabaseMirroring.ServerMirroringRole = [Microsoft.SqlServer.Management.Smo.ServerMirroringRole]::Partner
$SecondaryEP.Create()
$SecondaryEP.Start()

#Point each database at its partner's endpoint, mirror side first
foreach ($db in $Databases){
    $ServerSecondary.Databases.item($db).MirroringPartner = "TCP://"+$ServerPrimary.NetName+":"+$EPPort
    $ServerSecondary.Databases.item($db).alter()
    $ServerPrimary.Databases.item($db).MirroringPartner = "TCP://"+$ServerSecondary.NetName+":"+$EPPort
    $ServerPrimary.Databases.item($db).alter()
}

As you can see this is pretty simple, and easily reusable whenever you need to set up DB mirroring. For this example we entered the database names explicitly, but it could easily be modified to mirror every user database on an instance with the following changes:

Import-Module SQLPS -DisableNameChecking

#Setup some basic variables
$ServerPrimaryName = "Server1"
$ServerSecondaryName = "Server2"
$ServerPrimary = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server($ServerPrimaryName)
$ServerSecondary = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server($ServerSecondaryName)

#Load all the user databases in the instance
$Databases = $ServerPrimary.Databases | Where-Object { $_.IsSystemObject -eq $false }

#Both SQL Server instances should have read/write to here
$FileShare = "\\FileServer\DBShare\"

#Get all dbs onto the same log: back up on the primary, restore on the secondary with NoRecovery
foreach($db in $Databases){
    Backup-SQLDatabase -Database $db.Name -ServerInstance $ServerPrimaryName -BackupFile ($FileShare+$db.Name+"_full.bak")
    Backup-SQLDatabase -Database $db.Name -ServerInstance $ServerPrimaryName -BackupFile ($FileShare+$db.Name+"_log.trn") -BackupAction Log
    Restore-SQLDatabase -Database $db.Name -ServerInstance $ServerSecondaryName -BackupFile ($FileShare+$db.Name+"_full.bak") -NoRecovery
    Restore-SQLDatabase -Database $db.Name -ServerInstance $ServerSecondaryName -BackupFile ($FileShare+$db.Name+"_log.trn") -RestoreAction Log -NoRecovery
}

#Now we need to create a TCP Mirroring EndPoint on each Server
$EPName = "DBMirror-PS"
$EPPort = 7022

$PrimaryEP = New-Object -TypeName Microsoft.SqlServer.Management.Smo.EndPoint($ServerPrimary, $EPName)
$PrimaryEP.ProtocolType = [Microsoft.SqlServer.Management.Smo.ProtocolType]::Tcp
$PrimaryEP.EndpointType = [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring
$PrimaryEP.Protocol.Tcp.ListenerPort = $EPPort
$PrimaryEP.Payload.DatabaseMirroring.ServerMirroringRole = [Microsoft.SqlServer.Management.Smo.ServerMirroringRole]::Partner
$PrimaryEP.Create()
$PrimaryEP.Start()

$SecondaryEP = New-Object -TypeName Microsoft.SqlServer.Management.Smo.EndPoint($ServerSecondary, $EPName)
$SecondaryEP.ProtocolType = [Microsoft.SqlServer.Management.Smo.ProtocolType]::Tcp
$SecondaryEP.EndpointType = [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring
$SecondaryEP.Protocol.Tcp.ListenerPort = $EPPort
$SecondaryEP.Payload.DatabaseMirroring.ServerMirroringRole = [Microsoft.SqlServer.Management.Smo.ServerMirroringRole]::Partner
$SecondaryEP.Create()
$SecondaryEP.Start()

#Point each database at its partner's endpoint, mirror side first
foreach ($db in $Databases){
    $ServerSecondary.Databases.item($db.Name).MirroringPartner = "TCP://"+$ServerPrimary.NetName+":"+$EPPort
    $ServerSecondary.Databases.item($db.Name).alter()
    $db.MirroringPartner = "TCP://"+$ServerSecondary.NetName+":"+$EPPort
    $db.alter()
}

And this certainly speeds things up if you need to set up a large number of mirrored dbs in one fell swoop.

This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.
