Musings of a Data professional

Stuart Moore


dbatools’ Copy-DbaDatabase and Start-DbaMigration now supporting Azure Storage and Azure Managed Instance

Pleased to announce that we’ve now added Azure Storage support to all the dbatools commands that copy databases around. This is part of our march towards full support for migrating on-premises SQL instances to Azure SQL Managed Instances.

These changes also make it much much simpler to copy or migrate databases across SQL Server instances in different Active Directory domains.

Prerequisites

The features in this post are available in dbatools’ main production branch from version 0.9.807 onwards.

You’ll need an Azure Storage account set up; previous blog posts on doing that can be found at:

I’d highly recommend you use the Shared Access Signature method, for two main reasons:

  • Access Key auth is deprecated in SQL Server 2016+
  • You can’t stripe or use multiple storage accounts with Access Keys, but you can with Shared Access Signatures
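As a rough sketch of what that setup looks like (my own example, not from the original post), you can create the Shared Access Signature credential on both instances with dbatools’ New-DbaCredential. The storage URL and SAS token below are placeholders, $cre is the SQL login credential you use to connect to the Managed Instance (as in the examples further down), and parameter names may differ between dbatools versions:

# Hypothetical example: create a SAS credential on the source and destination instances.
# The credential name must match the container URL, and the secret is the SAS token
# without the leading '?'. Check Get-Help New-DbaCredential for your version.
$sasToken = ConvertTo-SecureString '<your SAS token>' -AsPlainText -Force
$credParams = @{
    Name           = "https://azblogdemo.blob.core.windows.net/sql"
    Identity       = "SHARED ACCESS SIGNATURE"
    SecurePassword = $sasToken
}
New-DbaCredential -SqlInstance "Server1" @credParams
New-DbaCredential -SqlInstance "MyNewMI.public.cus29s972e4513d6.database.windows.net,3342" -SqlCredential $cre @credParams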

Copy-DbaDatabase to Azure Managed Instance

So now you’ve got credentials on the source and destination instances, and a working storage account, you’re ready to use Copy-DbaDatabase to move your database up to a Managed Instance.

If you’re using Shared Access Signatures, then it’s as simple as using your storage account URL as the SharedPath:

$copyParams = @{
    Source = "Server1"
    Destination = "MyNewMI.public.cus29s972e4513d6.database.windows.net,3342"
    DestinationCredential = $cre
    SharedPath = "https://azblogdemo.blob.core.windows.net/sql"
    BackupRestore = $True
}
Copy-DbaDatabase @copyParams

The only difference is that you now pass the URL of your blob storage account to SharedPath rather than a UNC path.

If you are using Access Keys, then you will need to pass the name of the SQL credential holding them in via the AzureCredential parameter:

$copyParams = @{
    Source = "Server1"
    Destination = "MyNewMI.public.cus29s972e4513d6.database.windows.net,3342"
    DestinationCredential = $cre
    SharedPath = "https://azblogdemo.blob.core.windows.net/sql"
    BackupRestore = $True
    AzureCredential = "azblogdemo"
}
Copy-DbaDatabase @copyParams

All the other parameters you’d use with Copy-DbaDatabase, other than ReuseSourceFolderStructure, are available. That one’s not used as Managed Instances don’t allow users to specify paths.

You may still get errors when copying certain databases. At the moment we’re not checking all the internal features of a database to make sure they are compatible with Azure SQL Managed Instances.

Start-DbaMigration with Azure Managed Instances

With Copy-DbaDatabase now supporting Managed Instances, we can also start making Start-DbaMigration support them as well.

This is a preview of where we’re going, as this particular command has more moving pieces that we need to make sure are properly compliant with the differences between on-premises instances and Managed Instances linked to above.

The sections working so far are:

  • Copy-DbaDatabase, as explained above
  • Copy-DbaLogin, this will copy SQL Server logins across but not Windows logins as they are not supported in MI yet
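To give an idea of how that looks, here’s a minimal sketch (my own, not from the original post) of a Start-DbaMigration call using an Azure Storage URL as the SharedPath, mirroring the Copy-DbaDatabase examples above; check Get-Help Start-DbaMigration for the exact parameter names on your version:

$migParams = @{
    Source                   = "Server1"
    Destination              = "MyNewMI.public.cus29s972e4513d6.database.windows.net,3342"
    DestinationSqlCredential = $cre   # the credential parameter name may differ by version
    BackupRestore            = $True
    SharedPath               = "https://azblogdemo.blob.core.windows.net/sql"
}
Start-DbaMigration @migParams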

Bonus news for cross domain transfers

A common query on the #dbatools channel on SQL Community is how to copy a database or migrate an instance between 2 SQL Instances in different Active Directory domains with no trust relationship. The SQL Server connection is fairly simple as we can use SQL Server Logins, but the shared storage is trickier.

Permissions have to be very loose, and deciding how to sort the networking out can take a while

The good news is that using Azure Storage gets around all of those issues. Create yourself an account, create the credentials on the source and destination instances and you’re good to go.

It’s all secure. The backups are encrypted on Azure Storage, and are only accessible to those people you allow (even Microsoft can’t see them unless you give them your keys). All the file transfer is done under 2048-bit SSL encryption, which makes it safer than having a non-domain-joined server with a share opened up to Everyone/Everyone.

Wrap up

This is iteration one in a process that we hope will make migrating to a Managed Instance as easy as we’ve made migrating across SQL instances in your own Data Centres.

Please report any errors or issues you come across; we’d love to hear your feedback and ideas for improvements.

Feel free to leave them in the comments below, drop them into the #dbatools channel on SQL Community, or raise an issue on GitHub.

Fixing things when you push changes to the wrong branch on GitHub

If you’re in a panic there’s a TL;DR version at the bottom of the page

We’ve all done it. Working for ages tracking down that elusive bug in a project. Diligently committing away on our local repo as we make small changes. We’ve found the convoluted 50 lines of tortured logic, replaced it with 5 simple, easy to read lines of code and all the tests have passed. So we push it back up to GitHub and wander off to grab a snack as a reward.

Halfway to the snacks you suddenly have a nagging doubt in the back of your mind that you don’t remember starting a new branch before starting on the bug hunt.

Snack forgotten, you hustle back to your desk, mistype your password twice as you try to log in, and there it is. You’ve pushed directly into the main branch!

Fear and embarrassment kick in. Hell, how am I going to fix this! And how much of an idiot am I going to look!

Been there, done that, got enough T-Shirts I really don’t need to buy clothes.

First off, don’t panic. You aren’t the first, and you won’t be the last. We’ll walk through some simple steps to recover from this modern faux pas (I don’t think Debrett’s covers this situation yet, but if they’d like to I’m happy to syndicate).

First, jump on Slack/Teams/email or whatever you use to communicate across the project to let everyone know what you’ve done. It’s much less embarrassing to do it now than when someone’s forked off your unchecked code, or checked something in and merged with it.

Now you need to get your commits out of the main branch. As these have been committed, there are two git options to look at: revert or reset. As this is a public SNAFU, revert is the correct way to do this.

A git revert creates a commit that undoes the commits you’re reverting so everything is logged in the history.

git reset will remove the evidence it ever happened from the commit history, which makes it hard to pinpoint errors if you don’t get it perfectly right

This doesn’t mean a git revert is without its issues or potential pitfalls. But because you alerted everyone in step 1 (you did fess up, didn’t you?) you’ve got some breathing space, so take it slowly and don’t screw it up again.
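For contrast, here’s a rough sketch of what the reset route would involve (the commit ID is a placeholder); it rewrites history and needs a force push, which is exactly what you don’t want on a branch other people are using:

# Not recommended for a shared branch: rewrites history and requires a force push
git reset --hard <last-good-CommitID>
git push --force origin main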

First you need to work out how far back you need to revert to. Your options are to revert a set number of commits or to revert to a specific commit

To revert the last x commits, you’d use this syntax:

git revert HEAD~x..HEAD

So to revert the last 2 commits you’d use:

git revert HEAD~2..HEAD

But how would you know how many commits to revert? That leads into the next bit, working out which commit to revert to. There are two options here: GitHub or git log.

Getting Commit ID from git log

git log is the more complex way of doing this, but is much more flexible. And if you’re like me and spend most of your git time on the command line, then it saves finding the mouse and a different window. We’ll look at two command-line options that should be enough to sort out anything but the worst messes.

Running git log on its own produces a lot of verbose output:

git log showing verbose output

That’ll keep paging through as much history in that branch as has been kept. To exit just hit q

For some brevity we use the --oneline switch to just return the basic facts:

Using the oneline switch with git log

This is much easier to read, but still returns everything in the history. So let’s reduce that with the -x parameter, where x is an integer saying how many lines you want returned:

restricting the number of row git log returns

Hopefully, if you’ve caught the issue quickly and warned everyone else, you won’t have too many commits to scan. For the revert you need enough of the Commit ID to be unique, which --oneline gives you.

Getting Commit ID from GitHub

As you’ve pushed the commits to GitHub, it offers a nice way to see the commit history. Below is a snapshot of a repository I’ve just had to revert because I wasn’t paying attention.

To get the commit history, click on the highlighted number of commits.

Where to find git commit history on github

This pops up a history list of all the commits with their commit hash:

git history in github

There’s a handy clipboard button next to each commit to grab the Commit ID for the revert

Reverting the Commits

Now we need to revert the unwanted commits. There are two options. You can revert a set of commits counted back from the current HEAD of the branch. For example, to revert the last 2 commits in the branch you’d use:

git revert HEAD~2..HEAD

or you can revert a specific commit by passing its Commit ID:

git revert 93cd242

Personally, I always go with the commit hash. When you’re in a bit of a panic the first thing that goes for me is counting, so having an exact value to revert to saves a lot of double checking
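As a hedged aside (not from the original post): if the unwanted commits are contiguous, you can also revert everything after a known-good commit in one go, the commit ID below being a placeholder:

# Creates one revert commit for each commit made after <last-good-CommitID>
git revert <last-good-CommitID>..HEAD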

And you can revert commit by commit if you want. In the example I’ve been using here I wanted to be sure of exactly what was being taken out of the files as I went along. So if you look at the git history from after I’d fixed things you can see the 2 reverts at the top of the history.

Git log history showing revert results

So now you’ve reverted your local repository, the next thing you need to do is to push the changes back up to the GitHub repository. I know you know how to do this, as that’s what caused this issue in the first place!

Finally, let everyone know that they can use the main branch again, and make a note to check in future.

TL;DR

Let everyone know the branch is not right to stop any pulling/pushing/branching from it

Then get the commit history:

git log --oneline -20

Then revert everything after the last good commit:

git revert <last-good-CommitID>..HEAD

Finally push everything back to the origin:

git push 

Let everyone know the main branch is safe to use again.

Incoming for 2019

It’s the start of a new year, so time to look at the things I’ve got coming up:

Personally

Just started booking things in for 2019, but already got a couple of things booked in that I’m really looking forward to:

SQLBits 2019

I’ve been going to SqlBits for years, and this year I’ve been lucky enough to be picked to speak. I’ll be presenting my session ‘Slack for the DBA’, about why and how you can use ChatOps techniques in your day to day database support to speed things up.

Not sure which day I’ll be speaking on, but I’ll be there for Friday and Saturday if you want to say hi. Hopefully there for the traditional 5k SQL Run on the Friday morning as well

SQL Saturday Iceland

As I’ll be passing over on the way to the next event I thought it’d be rude not to stop off for this as well. Plus an excuse to finally visit Iceland

MVP Global Summit

The yearly gathering of Microsoft MVPs. Really looking forward to this. Meeting other MVPs, getting to talk to the MS product teams, and getting some in depth information.

Others

Just starting to look at other events, and going to be booking in some Usergroup speaking slots as well. So if you’re looking for a Data Platform or PowerShell speaker for your user group then please get in touch.

Nottingham SQL Server User Group

The Nottingham SQL Server User group is back for its bi-monthly meetings in 2019. The dates for 2019 are:

All the dates link to the individual meetup events where you can find more details on that evening’s happenings. We’re picking up some quality speakers and topics for 2019. We welcome anyone who works with the MS Data Platform, from beginner to expert, and we’re keen to hear from all speakers, from first-timers to those who’ve been around the international block 😉

Nottingham PowerShell Usergroup

And the Nottingham PowerShell Usergroup is back for 6 meetings in 2019 as well. This will be our second year hosting these events, and we’ve gradually built up a strong core audience, but we’d love to meet more PowerShell people. As normal we’ll be covering a wide range of topics that will appeal to all sorts of PowerShell users. The dates for 2019 are below, each being a link to the meetup for that date:

Again it’d be great to hear from you if you’d like to speak. Any PowerShell topic is more than welcome, and I’m more than happy to help you work on any ideas you have and get you started with your speaking career

PsConf.Eu – a quick review

This was my second trip to Hannover for PsConf.EU, a 4 day conference covering every corner of PowerShell.

Sessions

Let’s start with the most important part of a conference, the content. Just a quick scan of the Agenda gave a hint of the quality lined up. With sessions from the PowerShell Dev team, and deep dives from community maestros there was plenty of highly technical content, but mixed up with some easier intros to new topics.

For me some of the highlights were getting a first view of some of the new features coming out from the DSC team for the LCM previews, straight from the horse’s mouth (well, Michael Greene, the team leader: https://social.msdn.microsoft.com/profile/Michael+Greene); more details here. And the look at Pester internals from Jakub Jares (b | t) was good as well; I now feel I know a lot more about the underpinnings, and the reasons behind some of the ways you have to use it make much more sense.

No bad sessions from my point of view. A few issues with demos. Just from my point of view as a speaker: if you’re going to run a fragile multi-machine demo, record a good version ahead of time. While it’s always good to do a demo live, with those setups once it’s gone wrong it’s very hard to recover.

The days run long, 08:30 – 20:00 on the Wednesday with the 3-hour workshops at the end. For me this is a bonus. When I’m at a conference I like to concentrate on the learning. At some conferences I’ve been to it’s felt as though you spend almost as much time at lunch or coffee breaks as you do in the sessions. So while there were long days it felt worth it.

The venue is well lit and the AV is great; even from the back of the room the screens were clear and the PA carried the speaker right back. The rooms were closer together this year, and easier to navigate between. And despite the temperature being a step up from the UK, the rooms were kept at a pleasant but not overly cold temperature. Plenty of food and drink, and the lunch menus weren’t holding anything back.

Community

I’m lucky enough to know quite a few PowerShell community members already, but I met plenty more at PSConf.EU. Everyone’s there for the same reason, so it was always easy to spark up a conversation with someone, whether sat next to them waiting for a session or in the queue for lunch. It was good to see the number of PS UserGroups that were teased out over a lunchtime breakout session; hopefully the list will be up on the web shortly.

Party

Tuesday night was party night this year. Back to the Hannover Zoo for an evening of catching up and making new friends. The boat was pushed out with the venue, the demonstration from the Zoo (this year, polar bear feeding), and free food and beer. Don’t think anyone went hungry or thirsty that night.

Logistics

From the UK this is a pretty easy conference to get to. From Nottingham I took the train down to Heathrow, a 1-hour hop over with British Airways and a 20-minute train journey saw me in the centre of Hannover. Taxis are about €12 from the station to the conference hotel, but as I’d spent a lot of time sitting down I decided to take the 25-minute walk across.

Accommodation

This year I stayed in the conference hotel, the Congress Hotel Am StadtPark, next door to the main conference venue. Good hotel, the room was a decent size and had all the usual features. The bar was reasonably priced and usually full of PowerShell geeks if you wanted a chat over beers. The restaurant was good, but not cheap. The only downside was flaky WiFi, but that improved on Saturday once all the phones, tablets and laptops from the conference had left, for some reason…

Another great year from Tobias and his team. They’ve announced the dates for next year, 4th-7th June 2019, and they’ve been written into the calendar in ink, as I’ll definitely be heading back.

Changes to SQL Relay session selection process 2018

This year the SQL Relay committee has decided to make a couple of changes to our session selection procedure.

Our aim this year is to be as inclusive and transparent about our process as we can manage. To this end, this post is going to lay out our aims and how we plan to achieve them.

We want everyone to feel they can submit without worrying that their submission will be overlooked due to Gender, Disability, Availability to do the whole tour or any other reason you may be concerned about.

To do this we are moving to a blind selection process. I (Stuart) will be the only member of the committee who can see the list of submissions. At the end of the Call for Sessions I will release only the session abstracts to the rest of the committee members, who will then choose the sessions based only on the abstracts.

Then, should we have openings due to people’s availability we will go through again to fill in any holes in the agenda.

If you have any special requirements then please add them to your submission. They won’t be used during the selection process, but will allow us and our venues to ensure we cover them.
The Call for Papers is available here – SQL Relay 2018 CfP

If you are worried about any other reason you may feel your submission could be overlooked and want to talk it through, then please contact me (comments, Contact Form or on Twitter @napalmgram). Anything you ask will be treated in confidence.

If you would like some help with, or an independent review of, your abstract then please get in touch with Speaking Mentors, where some of the best SQL session submitters can help you out.

New year, New speaking dates

Lining up a few SQL Server Usergroups speaking sessions for the year already:

All sessions will be:

Indexing Nightmare – Cut Through the Clutter

Inherited a database with 30 indexes on every table? Has the vendor for your 3rd party app recommended more indexes over the years than you can remember? Got indexes that were added to fix a data load problem 4 years ago, but not sure if they’re still being used? Indexes are key to SQL Server performance, but like everything, too much of a good thing is a bad thing. In this session we’ll look at how you can analyse your current indexes with the aim of consolidating them into useful ones, removing some completely, and improving the ones you’ve got left.

Except for Southampton, where it’ll be:

Get on the Bus

As time goes by, more systems are crossing hosting boundaries (On Premises, Azure, multi-cloud provider, ISVs). We need a simple reliable mechanism to transfer data between these systems easily, quickly and reliably. Microsoft have taken their Message Bus technology and moved it to the cloud as Service Bus. This session will introduce you to this service and give examples of how internal databases can safely process data from cloud hosted applications without having to be exposed to the InterTubes. Examples are predominantly .Net C#, but aren’t complex!

Dates

Nottingham SQL Server Usergroup – 12th January 2017
(Also presenting will be Steph Middleton, talking about Building a Robust SSIS Solution)
(More details and registration here)

Midlands/Birmingham SQL Server Usergroup – 19th January 2017
More Details and Registration here

Surrey SQL Server Usergroup (Guildford) – 20th February 2017
Link and details to be confirmed

Southampton SQL Server Usergroup – 1st March 2017
More details and registration here

Hope to see some of you there. And if there are any other usergroups out there that are looking for speakers then let me know; I have presentations on SQL Server, PowerShell and general IT process to offer.

PowerShell module to automate restoring SQL Server backups

I’ve long been a proponent of automatically restoring as many of your SQL Server backups as you can. There are many benefits of doing this, amongst them:

  • Checking your backups are working, I’ve seen people with backups they can’t restore
  • Knowing how long it will take to recover, your boss isn’t going to be happy if they’re still waiting 3 hours after you said the business would be back up and running
  • Having logs of having restored them, Auditors love this sort of thing
  • Peace of mind knowing that your restores are going to work when the brown stuff hits the reciprocating draught machine.

I’ve been using a collection of scripts and functions based on my previous blogs and presentations on the subjects. But they’ve never really been in a state that I felt comfortable sharing with the world.

Until now:


Day 7 of 31 days of SQL Server Backup and Restore using PowerShell: Asynchronous backups – Part 1

So far, all of the backup scripts we’ve looked at have been firing off backups synchronously, that is, each one happens after the previous one has completed. This is the same as happens with SQL Server Maintenance Plans.

Within PowerShell we have 2 ways of submitting multiple backup jobs at the same time; each has its own advantages and drawbacks.

Method 1 – SMO’s SQLBackupAsync method and PowerShell Events

SMO’s backup object offers its own SqlBackupAsync method which can simply be fired off. This can also be extended with PowerShell’s Event subsystem to allow you to capture the status.

Method 2 – PowerShell Jobs:

PowerShell introduced Jobs into its bag of tricks. These allow you to fire off a block of script, let it run in the background while you do other work, and then pick up the results at a later point. Jobs are simpler to work with, and allow you to group elements of work that have to happen together (e.g. if you have 2 DBs that must back up together, or you need to make sure a filesystem backup happens at the same time as the SQL Server backup).
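As a taster of the Jobs pattern (a minimal sketch under my own assumptions, not the script from tomorrow’s post), Backup-SqlDatabase from the SQLPS module stands in here for any long-running piece of work:

# Fire a backup off in the background as a Job, carry on working, then collect the result
$job = Start-Job -ScriptBlock {
    Import-Module SQLPS -DisableNameChecking
    Backup-SqlDatabase -ServerInstance "Server1" -Database "AdventureWorks" -BackupFile "c:\psbackups\AdventureWorks.bak"
}
# ... do other work here ...
Wait-Job $job | Receive-Job    # pick up the output once the job has finished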

Today we’ll look at Method 1, and then move on to Method 2 tomorrow.

So without further ado, here are the basics of an asynchronous backup script:

Import-Module "SQLPS" -DisableNameChecking
$ServerName = "Server1"
$BackupPath = "c:\psbackups"
$SQLSvr = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server($ServerName)
$Db = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Database
$Backups = @()
$i = 0
foreach ($Db in $SQLSvr.Databases | Where-Object {$_.Name -ne "tempdb"}){
    # Each backup gets its own connection, held alongside the backup object in a hash
    $btmp = @{}
    $BackupConn = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server($ServerName)
    $Backup = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Backup
    $DeviceType = [Microsoft.SqlServer.Management.Smo.DeviceType]::File
    $BackupName = $BackupPath+"\"+$Db.Name+"_"+[DateTime]::Now.ToString("yyyyMMdd_HHmmss")+".bak"
    $Backup.BackupSetDescription = "Full Backup of "+$Db.Name
    $Backup.Database = $Db.Name
    $BackupDevice = New-Object -TypeName Microsoft.SqlServer.Management.Smo.BackupDeviceItem($BackupName,$DeviceType)
    $Backup.Devices.Add($BackupDevice)
    $btmp["backup"] = $Backup
    $btmp["connection"] = $BackupConn
    $backups += $btmp

    # Fire the backup off asynchronously on its own connection
    $backups[$i]["backup"].SqlBackupAsync($backups[$i]["connection"])
    $i++
}

This script will perform a full database backup of each database on the specified server except for tempdb. But it’s the PowerShell equivalent of throwing something and not being worried about where it lands or if it shatters.

Working through the script:

It starts off normally with loading the SQLPS module, setting the now usual variables, and creating the familiar Database and Server Connection objects.

Then we have this line

$Backups=@() 

For those of you not familiar with PowerShell, this is creating a new array variable called $Backups. This is because for asynchronous backups we need to hold a connection object for EACH backup. Up till now, as we’ve been submitting the backups synchronously, they are queued up and submitted one by one, so they can use the same connection. As we’re submitting these asynchronously they need their own connection. The simplest way to keep track of which connection goes with which backup is to track them all in a multidimensional array, and we build the first dimension here.

My old friend the counter $i is also set to 0, this will be used to keep track of our array elements.

Then we drop into our normal ForEach loop through the databases on our server.

First we build a temporary hash $btmp which will hold the backup information before we add it to $Backups. If you’ve not met a PowerShell Hash before, it’s an associative array, so it holds data like:

$hash["author"]="stuart"
$hash["topic"]="PowerShell"

And since this is PowerShell, each value can in fact be an object, which is exactly what we’re going to use it for.

We build a new connection object for our backups called $BackupConn, and then a Backup object as normal. We then add these to our $btmp hash as follows:

$btmp["backup"]= $Backup
$btmp["connection"] = $BackupConn

And then we append this to our $Backups array.

As demonstrated in the next line when we actually submit the backup:

$backups[$i]["backup"].SqlBackupAsync($backups[$i]["connection"])

We can reference our backup and its specific connection by referencing the index of our $Backups array, and then referencing the contained hash.

If you’re backing up enough databases with enough data you’ll have time to switch to SQL Server Management Studio and run sp_who2 to see all the connections and backup processes running.

All the backups will now be submitted on their own connections and SQL Server will process them in whichever order it wants. That last point is very important to remember. When running backups asynchronously there is no guarantee of the order in which the backups will run, so if you need to ensure a certain order you’ll either need to stick to running them synchronously or look at the PowerShell Job method in tomorrow’s post, which allows you to batch them up.

Now, this is all great, but as mentioned at the start of this post we’re just throwing the backups at SQL Server, and have no way of knowing what’s completed. So we’ll now add some extra code to the script so we can get this information back.

To do this we’re going to be using PowerShell Events. Many PowerShell methods expose events which can fire at certain points during the execution of the method. Your script can poll for these events, and then take appropriate action when they fire.

For this script we’re going to register the following 2 events:

    Register-ObjectEvent -InputObject $backups[$i]["backup"] -EventName "Complete" -SourceIdentifier "asyncbkup-complete-$($Db.Name)" -MessageData "$($Db.Name)-$i" | Out-Null
    Register-ObjectEvent -InputObject $backups[$i]["backup"] -EventName "Information" -SourceIdentifier "asyncbkup-info-$($Db.Name)" -MessageData "$($Db.Name)-$i" | Out-Null

These events fire when our backup completes, or when it wants to return some other informational message (e.g. an error). In each case we register the event against the specific backup object using our multidimensional array $backups, specify the event we want to catch, give this particular event an identifier to distinguish it from all other events, and then give it a ‘payload’ which we can retrieve later.

We then need to poll the events till we get one of ours. There are a number of ways this loop can be run, either as a continuous while loop or from a Timer which fires at regular intervals.
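As a rough sketch of the Timer alternative (my own example, not part of the script below), a System.Timers.Timer raises an Elapsed event at a fixed interval that you can subscribe to in just the same way:

# Hypothetical polling timer: raises an Elapsed event every 5 seconds
$timer = New-Object System.Timers.Timer -ArgumentList 5000
Register-ObjectEvent -InputObject $timer -EventName "Elapsed" -SourceIdentifier "backup-poll" | Out-Null
$timer.Start()
# Each time it fires you can wake up, check your $backups array, then go back to waiting
Wait-Event -SourceIdentifier "backup-poll" | Remove-Event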

In this example we’re going to add a “complete” value to our hash which we’ll initially set to 0. As each backup completes (successfully or unsuccessfully) we’ll set the value to 1. If we sum all the “complete” values in our $backups array, when it’s equal to the number of entries in the array then we know that all the backups have finished and we can exit the loop.

While in the loop we use the Wait-Event cmdlet to catch any events. We use Tee-Object to get the information from the event while still allowing us to pass it to Remove-Event; we need to do that to remove the event from the queue and prevent processing it repeatedly.

Then we’ll split the payload into an array index and the database name, pull the ExecutionStatus from the Backup object firing the event and get the error message (if there is one). Then, using a switch statement to catch the possible event statuses, we build up an $output string to record the information, and we mark the appropriate records as completed.

After the loop exits we clear all Event registrations, and output the message we built up:

Import-Module "SQLPS" -DisableNameChecking

$ServerName = "Server1"
$BackupPath = "c:\psbackups"

$SQLSvr = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server($ServerName)
$Db = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Database

$Backups = @()
$i = 0

foreach ($Db in $SQLSvr.Databases | Where-Object {$_.Name -ne "tempdb"}){
    $btmp = @{}
    $BackupConn = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server($ServerName)
    $Backup = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Backup
    $BackupName = $BackupPath+"\"+$Db.Name+"_"+[DateTime]::Now.ToString("yyyyMMdd_HHmmss")+".bak"
    $Backup.BackupSetDescription = "Full Backup of "+$Db.Name
    $Backup.Database = $Db.Name
    $Backup.Devices.AddDevice($BackupName,'File')
    $btmp["backup"] = $Backup
    $btmp["connection"] = $BackupConn
    $btmp["complete"] = 0    # set to 1 once this backup's Complete event arrives
    $backups += $btmp

    Register-ObjectEvent -InputObject $backups[$i]["backup"] -EventName "Complete" -SourceIdentifier "asyncbkup-complete-$($Db.Name)" -MessageData "$($Db.Name)-$i" | Out-Null
    Register-ObjectEvent -InputObject $backups[$i]["backup"] -EventName "Information" -SourceIdentifier "asyncbkup-info-$($Db.Name)" -MessageData "$($Db.Name)-$i" | Out-Null

    $backups[$i]["backup"].SqlBackupAsync($backups[$i]["connection"])
    $i++
}

$output = ""

# Keep polling for events until every backup has been marked as complete
while (($backups.complete | Measure-Object -Sum).Sum -lt $backups.Length){
    Wait-Event | Tee-Object -Variable event | Remove-Event

    $ErrorText = $event.SourceEventArgs.Error.Message
    $DbName,$i = $event.MessageData.Split('-')

    $status = $event.SourceArgs[0].AsyncStatus.ExecutionStatus

    if ($backups[$i]["complete"] -ne 1){
        $now = Get-Date
        switch ($status){
            "Succeeded" {
                $output += "`n $DbName Completed at $now"
                $backups[$i]["complete"] = 1
                break;}
            "Failed" {
                $output += "`n $DbName Failed at $now with $ErrorText"
                $backups[$i]["complete"] = 1
                break;}
            "InProgress" {
                $output += "`n $DbName In Progress at $now"
                break;}
            "Inactive" {
                $output += "`n $DbName Inactive at $now"
                break;}
        }
    }
}

Get-EventSubscriber | Unregister-Event
Write-Output $output

As an example of the output from this script, here’s a sample from one of my test boxes where I’d marked one DB as offline:

InternetSales Failed at 09/09/2013 21:32:08 with Database 'InternetSales' cannot be opened because it is offline.
 dbgrow In Progress at 09/09/2013 21:32:09
 dbgrow In Progress at 09/09/2013 21:32:09
 AWDataWarehouse In Progress at 09/09/2013 21:32:09
 AWDataWarehouse In Progress at 09/09/2013 21:32:09
 fg_test In Progress at 09/09/2013 21:32:10
 fg_test In Progress at 09/09/2013 21:32:10
 fg_test In Progress at 09/09/2013 21:32:10
 fg_test In Progress at 09/09/2013 21:32:10
 HumanResources In Progress at 09/09/2013 21:32:10
 dbgrow Completed at 09/09/2013 21:32:10
 fg_test In Progress at 09/09/2013 21:32:10
 HumanResources In Progress at 09/09/2013 21:32:10
 msdb In Progress at 09/09/2013 21:32:11
 model In Progress at 09/09/2013 21:32:11
 AWDataWarehouse Completed at 09/09/2013 21:32:11
 modellock2 In Progress at 09/09/2013 21:32:11
 modellock In Progress at 09/09/2013 21:32:11
 model In Progress at 09/09/2013 21:32:11
 msdb In Progress at 09/09/2013 21:32:11
 psrestore In Progress at 09/09/2013 21:32:12
 modellock In Progress at 09/09/2013 21:32:12
 modellock2 In Progress at 09/09/2013 21:32:12
 ResellerSales In Progress at 09/09/2013 21:32:12
 SomeDB In Progress at 09/09/2013 21:32:12
 SomeDB In Progress at 09/09/2013 21:32:12
 psrestore In Progress at 09/09/2013 21:32:12
 SomeDB In Progress at 09/09/2013 21:32:12
 ResellerSales In Progress at 09/09/2013 21:32:12
 master In Progress at 09/09/2013 21:32:13
 HumanResources Completed at 09/09/2013 21:32:13
 restoretime In Progress at 09/09/2013 21:32:13
 restoredt In Progress at 09/09/2013 21:32:13
 fg_test Completed at 09/09/2013 21:32:13
 master In Progress at 09/09/2013 21:32:13
 Staging In Progress at 09/09/2013 21:32:13
 restoretime In Progress at 09/09/2013 21:32:13
 Staging In Progress at 09/09/2013 21:32:13
 restoredt In Progress at 09/09/2013 21:32:13
 psrestore Completed at 09/09/2013 21:32:13
 msdb Completed at 09/09/2013 21:32:13
 model Completed at 09/09/2013 21:32:13
 ResellerSales Completed at 09/09/2013 21:32:13
 modellock Completed at 09/09/2013 21:32:13
 modellock2 Completed at 09/09/2013 21:32:13
 test1 In Progress at 09/09/2013 21:32:13
 test1 In Progress at 09/09/2013 21:32:13
 Products In Progress at 09/09/2013 21:32:13
 TSQL2012 In Progress at 09/09/2013 21:32:13
 Products In Progress at 09/09/2013 21:32:13
 TSQL2012 In Progress at 09/09/2013 21:32:13
 master Completed at 09/09/2013 21:32:13
 Staging Completed at 09/09/2013 21:32:13
 SomeDB Completed at 09/09/2013 21:32:13
 restoretime Completed at 09/09/2013 21:32:13
 restoredt Completed at 09/09/2013 21:32:13
 test1 Completed at 09/09/2013 21:32:13
 Products Completed at 09/09/2013 21:32:13
 TSQL2012 Completed at 09/09/2013 21:32:13

Showing that we have multiple backups happening at once, with the finish order mostly being determined by the size of the Database and the speed of my test rig’s hard drive.

And if you switch to SSMS while the script is running and look at the connections/processes you’ll see all the backups working and queueing as well.

PowerShell Events are a powerful topic, and can be used in lots of other SQL Server scripting tasks. I’ve only covered the tip of them here; a pair of good articles on them are:

Use Asynchronous Event Handling in PowerShell

Manage Event Subscriptions with PowerShell

which are well worth a read.

Tomorrow we’ll take a look at the PowerShell Job way of doing this.

This post is part of a series posted between 1st September 2013 and 3rd October 2013; an index for the series is available here.

PowerShell to move Windows Roles and Features between servers

I’ve been working a lot on building some test and development environments for systems that didn’t have them. And as those ‘unimportant’ features had been missed out, it wasn’t too surprising to find that the build documentation for the servers was less than stellar.

So, how could I make sure that the test and dev boxes had the same Windows Roles and Features without wasting too much time? Well, PowerShell is quickly becoming my default answer to anything like this. And it turns out there’s a nice one-liner you can put together to do it as well (well, a one-liner if you’ve already got the module imported 🙂).

 

Import-Module ServerManager
Get-WindowsFeature | Where-Object {$_.Installed -eq $True} | Install-WindowsFeature -computername devbox

First up we import the ServerManager module; it’s loaded by default if you’re running Windows Server 2012, but it’s always good practice to make sure you’ve got it.

Get-WindowsFeature returns a list of all the available Roles and Features on the server; we filter that with Where-Object so we only have those roles and features that are installed. Then we pass the whole lot to Install-WindowsFeature and tell it to install them onto the remote box.

And the nice feature of Install-WindowsFeature is that it will only install anything that’s missing, so this snippet can also be used to check for missing sub-features.
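If the two boxes can’t see each other directly, a variation on the same idea (my own sketch, with placeholder paths) is to export the installed feature names to a file on the source and install from that file on the destination:

# On the source server: save the names of everything that's installed
Import-Module ServerManager
Get-WindowsFeature | Where-Object {$_.Installed -eq $True} |
    Select-Object -ExpandProperty Name | Set-Content C:\temp\features.txt

# On the dev/test box: install everything in the list (already-installed features are skipped)
Import-Module ServerManager
Install-WindowsFeature -Name (Get-Content C:\temp\features.txt)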

