Stuart Moore

Musings of a Data professional


Making SQL Agent Jobs Availability Group aware with dbatools

A new system has rocked up at work. To keep the database nice and available across a couple of sites, we’ve implemented a SQL Server Availability Group solution.

The setup for Availability Groups is well documented and dbatools has plenty of AG commands to help out and keep things in sync across the replicas.

But our issue was coping with all the 3rd party SQL Server stored procedures that weren’t Availability Group aware.

What do I mean by Availability Group aware? When running on an Availability Group, one SQL Server instance ‘owns’ the database at any point in time, but the SQL Agent jobs have to be replicated across all of the instances in the cluster. So you want to make sure that your SQL Server Agent jobs only do work on the instance that currently owns the Availability Group.

Doing this is pretty simple. Below is a piece of T-SQL that checks if the current SQL Server Instance is the primary instance in the AG. If it isn’t then we exit with an error.

IF (SELECT repstate.role_desc
    FROM sys.dm_hadr_availability_replica_states repstate
        INNER JOIN sys.availability_groups ag
            ON repstate.group_id = ag.group_id AND repstate.is_local = 1) != 'Primary'
BEGIN
    -- Severity needs to be 11 or higher for a T-SQL job step to register as failed
    RAISERROR ('Not Primary', 11, 1)
END

We exit with an error so we can make use of a SQL Agent job step’s ‘On failure’ action to quietly exit the job.

Why do we want to quietly exit the job? If the job recorded a failure every time it ran against a secondary, your monitoring system would hammer you with alerts about regularly failing jobs (you are monitoring your SQL Agent jobs, aren’t you?).

As we’re going to be using PowerShell to push this around a lot of jobs, let’s throw it into a variable:

$stepsql = "IF (SELECT 
	repstate.role_desc
        FROM sys.dm_hadr_availability_replica_states repstate 
			INNER JOIN sys.availability_groups ag 
				ON repstate.group_id = ag.group_id AND repstate.is_local = 1) != 'Primary'
    BEGIN
       RAISERROR ('Not Primary', 2, 1)
    END"

Next we’re going to grab all the Agent jobs we want to update. Luckily for me, the company prefixed all of their jobs with a unique stamp, so I just used a filter on the job name:

$jobs = Get-DbaAgentJob -SqlInstance MyInstance | Where-Object {$_.Name -like 'SVC_*'}

To keep things easy to read and save on line wrapping, I like to use parameter splatting. So we create a hashtable of values like so:

$jobParameter = @{
    SqlInstance = 'MyInstance'
    StepName = 'AgCheck'
    Database = 'Master'
    Subsystem = 'TransactSql'
    StepId = '1'
    OnFailAction = 'QuitWithSuccess'
    OnSuccessAction = 'GoToNextStep'
    Command = $stepsql
    Insert = $True
}

The Insert switch is new as of 15th October 2019 (I’ve just added it via a Pull Request). When it’s specified, the command inserts the new step at the specified StepId. So in this example it’s going to be the first step executed, as step IDs start from 1.

The Insert switch causes the command to increment the StepId of all subsequent job steps by 1 so the new step can fit in. It will also adjust the OnFailStep and OnSuccessStep values if the target steps have been moved, so the job’s flow isn’t affected.
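As a quick illustration, assuming a hypothetical job that already has two steps:

# Before inserting:               After New-DbaAgentJobStep with -Insert and StepId 1:
#   1) LoadData                     1) AgCheck
#   2) SendReport                   2) LoadData
#                                   3) SendReport
# Any OnSuccessStep/OnFailStep references to the moved steps are renumbered to match.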

In this example we set our OnFailAction to be QuitWithSuccess; as mentioned above, this will stop our monitoring system filling up with alerts.

All that’s left is to loop through all of the jobs in our collection and use New-DbaAgentJobStep to insert it:

Foreach ($job in $jobs) {
    New-DbaAgentJobStep -Job $job @jobParameter
}

To do this across the other Availability Group nodes we have 3 options. The first is to modify our hashtable to make use of New-DbaAgentJobStep‘s ability to target multiple SQL Server instances:

$jobParameter = @{
    SqlInstance = ('MyInstance','MyInstance2','MyInstance3')
    StepName = 'AgCheck'
    Database = 'Master'
    Subsystem = 'TransactSql'
    StepId = '1'
    OnFailAction = 'QuitWithSuccess'
    OnSuccessAction = 'GoToNextStep'
    Command = $stepsql
    Insert = $True
}

The second is to set up and test on a single node, and then use Sync-DbaAvailabilityGroup. This will sync a wide range of objects around an Availability Group (jobs, logins, credentials, custom errors, and many more). If you only want to synchronise the SQL Server Agent jobs, then the third option, Copy-DbaAgentJob, will do just that.
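For instance, something along these lines (a hedged sketch; the instance and Availability Group names are placeholders, so check the dbatools help for the exact parameters in your version):

# Sync server-level objects (including Agent jobs) from the primary to the other replicas
Sync-DbaAvailabilityGroup -Primary MyInstance -AvailabilityGroup MyAg

# Or copy just the Agent jobs across to the other nodes
Copy-DbaAgentJob -Source MyInstance -Destination MyInstance2, MyInstance3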

Hopefully this little change is going to make a few people’s lives easier; it’s certainly done that for me.


Failing a PowerShell SQL Agent step

SQL Server Agent offers some great branching facilities to cope with step failures. These work really well with T-SQL and SSIS steps, but how can you get your PowerShell steps to report failure properly so that branching kicks in?

This question was put to me at SQL Bits over the weekend. The asker was using PowerShell to move files around prior to importing them. They wanted the PowerShell step to report failure properly so their Agent logic would rerun the step.

You might want the same response to make sure your monitoring is letting you know when jobs fail. On that note, it would also be nice if you could raise an error or failure message in your PowerShell step and have that propagate back up to SQL Server.

Unfortunately the usual scripting standbys of returning 0 or $false don’t work.

The solution is to throw an error. This gives a simple way to register a failed step and raise an intelligible error back to SQL Server. At its most basic, it’s just adding a line like this to your PowerShell:

Throw "File Not Found"

So in the example I was looking at, you might have something like this:

If (-not (Test-Path -Path c:\imports\file.csv)){
    Throw "File c:\imports\file.csv does not exist"
} 
# And the rest of the script goes here

So now if the file is missing, the job step will fail AND you’ll have a nice error message in the job history.

For even more information you can have multiple throws in your script:

If (-not (Test-Path -Path c:\imports\file.csv)){
    Throw "File c:\imports\file.csv does not exist"
} 
$content = Get-Content c:\imports\file.csv -raw
if ($content.length -eq 0){
    Throw "File is empty"
}


So you can really narrow down the reasons for the SQL Agent job step failing.

This technique works with both the PowerShell and CmdExec SQL Agent step methods of running PowerShell scripts.
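For a CmdExec step the reason this works is slightly different: CmdExec judges success by the process exit code (0 is success by default), and an uncaught throw makes powershell.exe exit with a non-zero code. If you want to make that explicit, a minimal sketch along these lines works (the script path is made up for illustration):

try {
    # Hypothetical file-moving script - replace with your own
    & 'C:\imports\Move-ImportFiles.ps1'
}
catch {
    Write-Error $_
    # A non-zero exit code is what tells the CmdExec step to fail
    exit 1
}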


Converting SQL Server LSNs with PowerShell

If you’ve spent much time playing with SQL Server backups and restores then you’ll have discovered Log Sequence Numbers (LSNs). I’ve discussed before How SQL Server LSNs are generated, and went through how to manually convert a hexadecimal LSN to a numeric value.

As part of some work I was doing on the dbatools module I needed something to convert the Log Sequence Numbers coming back from the transaction log into numeric LSNs so I could use them to query the backup history tables in msdb.

Thinking this might be useful to other people, I’ve chopped out the dbatools-specific code and present the approach here:

Standard PowerShell good practice applies: include a decent help and examples block.

I use a couple of regex expressions to make sure we’re getting what we expect. Both are based on the Microsoft documentation for the LSN formats. So for the numeric LSN we’re looking for 26 numeric digits, and for the hexadecimal LSN representation we’re after two 8-character hex numbers and a 4-character one, separated by colons (‘:’). If the LSN passed in doesn’t match either of those, we raise a warning as we can’t do anything with it.

For hex to numeric, we split the string on ‘:’, convert each section from hex to int64 (int64 because LSNs can get really big), then convert back to a string and pad with the requisite number of 0s.

For numeric to hex we work from the back of the LSN, as we have to cope with potential leading 0s.
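As a minimal sketch of the idea (this is not the exact dbatools code; the function name and the padding widths are my assumptions based on the description above):

function Convert-Lsn {
    param([string]$Lsn)

    # Hexadecimal LSN: two 8-character hex numbers and one 4-character, colon separated
    if ($Lsn -match '^[0-9a-fA-F]{8}:[0-9a-fA-F]{8}:[0-9a-fA-F]{4}$') {
        $sections = $Lsn -split ':'
        # int64 as LSNs can get really big; pad the 2nd and 3rd sections to fixed widths
        $part1 = ([convert]::ToInt64($sections[0], 16)).ToString()
        $part2 = ([convert]::ToInt64($sections[1], 16)).ToString().PadLeft(10, '0')
        $part3 = ([convert]::ToInt64($sections[2], 16)).ToString().PadLeft(5, '0')
        return "$part1$part2$part3"
    }
    # Numeric LSN: 26 digits - work from the back to cope with potential leading 0s
    if ($Lsn -match '^[0-9]{26}$') {
        $part3 = '{0:x4}' -f [int64]$Lsn.Substring($Lsn.Length - 5, 5)
        $part2 = '{0:x8}' -f [int64]$Lsn.Substring($Lsn.Length - 15, 10)
        $part1 = '{0:x8}' -f [int64]$Lsn.Substring(0, $Lsn.Length - 15)
        return "${part1}:${part2}:${part3}"
    }
    Write-Warning "$Lsn doesn't match either of the expected LSN formats"
}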

PsDay.UK 2018 incoming

Time sure seems to fly. It’s been just under a year since the first PsDay.UK appeared on the UK PowerShell scene. After the success of that event it’s back for another edition! The 2018 event is on 10th October 2018.

The PsDay.UK team have gathered up a great set of speakers and sessions again; have a look at the quality of the agenda. With 3 tracks of sessions it’s going to be tough picking which one to go to. Well, except at 15:00, when I’ll be in Room 3 (aka Shift) presenting on ChatOps for PowerShell. I’ll be covering what ChatOps offers the PowerShell developer and how you can leverage your current skills and scripts to join the gif-filled party.

Tickets are available here – PsDay.UK 2018 Tickets – at a very reasonable price for a full day of quality sessions. There’s even a decent refund programme, so you’re covered if things change.

Hopefully I’ll be seeing some of you at Code Node on the 10th October. Feel free to wander up and say hi.

Rolling Back SQL Server agent job owner changes

So in this post I showed you a technique for updating the owner on all your SQL Agent jobs.

Whilst this isn’t a major change, you do need to have checked that the new ownership account has the correct permissions for some things (e.g. EXECUTE AS or CmdExec steps). With this in mind, someone asked me how I’d back this out, either for all jobs or for that one specific MUST NOT FAIL job that is failing.

As we have 2 ways to change them, we have 2 ways to back out the changes. Note that both of these require you to have done a small amount of work up front; there is no magic rollback option 🙁

PowerShell and dbatools

This one is a little simpler as we can use a lot of PowerShell plumbing to do the lifting for us. To set this up we use:

Get-DbaAgentJob -SqlInstance server1 | Select-Object Name, OwnerLoginName | ConvertTo-Csv | Out-File .\server1-JobOwners.csv

If you’re not a PowerShell user I’ll just break this down into steps:

  1. We use dbatools’ `Get-DbaAgentJob` cmdlet to get all the Agent jobs on Server1
  2. We pipe the output to `Select-Object`, selecting just the job Name and its owner’s login name
  3. The output is then converted to a CSV object
  4. Finally we write the data to a file in the current folder

So we’ve created a nice csv list of the original owners of our jobs, great. Now how do we use those to roll back?
To roll back every job:

Get-Content .\server1-JobOwners.csv | ConvertFrom-Csv | Foreach-Object {Set-DbaAgentJob -SqlInstance Server1 -Job $_.Name -OwnerLoginName $_.OwnerLoginName}

To roll back just one job it’s very similar; we just insert a Where-Object step into the pipeline:

Get-Content .\server1-JobOwners.csv | ConvertFrom-Csv | Where-Object {$_.Name -eq 'My Job'} | Foreach-Object {Set-DbaAgentJob -SqlInstance Server1 -Job $_.Name -OwnerLoginName $_.OwnerLoginName}

In both cases we read in our CSV and convert it from CSV into PowerShell objects. For a single job we use Where-Object to filter down to that one job; you could also use -like if you wanted to pattern match. The remaining records then get piped through a ForEach-Object loop, where we use Set-DbaAgentJob to reset the job’s owner.

T-SQL

The pre setup for this one involves creating and populating a table to store the old values:

CREATE TABLE JobOwnerHistory(
  Job_Id char(36),
  JobName nvarchar(128),
  OwnerLoginName nvarchar(128)
)

INSERT INTO JobOwnerHistory
  SELECT 
    sj.job_id, 
    sj.name as 'JobName',
    sp.name as 'OwnerLoginName' 
  FROM 
    msdb..sysjobs sj 
    INNER JOIN sys.server_principals sp on sj.owner_sid=sp.sid

So now, resetting a job’s owner is just a modification of our original script:

DECLARE @job_id char(36)
DECLARE @OwnerLoginName nvarchar(128)
DECLARE JobCursor CURSOR
FOR
SELECT 
  job_id,
  OwnerLoginName
FROM
  JobOwnerHistory
--WHERE JobName LIKE '%Finance%'
OPEN JobCursor 
FETCH NEXT FROM JobCursor INTO @job_id, @OwnerLoginName
WHILE (@@FETCH_STATUS <> -1)
BEGIN
  exec msdb..sp_update_job
      @job_id = @job_id,
      @owner_login_name = @OwnerLoginName
FETCH NEXT FROM JobCursor INTO @job_id, @OwnerLoginName
END 
CLOSE JobCursor 
DEALLOCATE JobCursor 

As written that will reset every job in JobOwnerHistory; if you want to filter down to a subset of jobs, you’d uncomment and modify the WHERE line.

Hope those examples are helpful. If you’ve stumbled across this page and it doesn’t quite fix your problem, please drop a comment and I’ll try to lend a hand.

Granting Azure Roles to Users or groups without a SignInName in Azure AD

Whilst sorting some permissions out on a resource group I stumbled across an issue with assigning them to a group. We have a replicated on-premises and Azure Active Directory solution in place. In the on-premises AD not all groups are Distribution groups, so they don’t have a ‘group1@contoso.com’ sign-in name. This causes a problem when trying to assign a role in the usual PowerShell way:

New-AzureRmRoleAssignment -SignInName 'group1@contoso.com' -RoleDefinitionName Contributor -ResourceGroupName MyRg

or

New-AzureRmRoleAssignment -SignInName 'group1' -RoleDefinitionName Contributor -ResourceGroupName MyRg

both return a nice red error message:

'New-AzureRmRoleAssignment : The provided information does not map to an AD object id.
At line:1 char:1
+ New-AzureRmRoleAssignment -SignInName group1@contoso.com -RoleD ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : CloseError: (:) [New-AzureRmRoleAssignment], KeyNotFoundException
    + FullyQualifiedErrorId : Microsoft.Azure.Commands.Resources.NewAzureRoleAssignmentCommand'


The workaround for this is to use the ObjectID parameter, and to grab the ObjectId like this:

$objectid = (Get-AzureRmADGroup -DisplayName "Our Developers").Id
New-AzureRmRoleAssignment -ObjectId $objectid -RoleDefinitionName Contributor -ResourceGroupName MyRg

Just another one of those little oddities that crop up when you’ve got a long-established infrastructure that doesn’t quite fit the modern templates 😉

dbachecks – SQL Server compliance testing with simple configuration management

If you’ve not heard yet, the people behind the dbatools PowerShell module (including me) have a new toolset for you: dbachecks. dbachecks uses Pester to let you validate your SQL Server estate in a simple way and generate meaningful graphical reports. The official launch of the module is at SQL Bits 2018.

Out of the box dbachecks uses test values that we’ve found to be the most appropriate from our years of experience with SQL Server. But these may not be the best values for your particular organisation. For example, we expect to see a Full backup less than 24 hours old for each database; in your case you might only take a Full backup once a week and use differential backups during the week. So we needed a flexible and simple system to let you change the values. dbatools friend Friedrich Weinmann (b | t) has written a great PowerShell module framework called PsFramework, which we’ve integrated into dbachecks to handle the configuration of the tests.

Out of the box dbachecks will load its default config values from the file .\internal\configurations\configuration.ps1 (#! link to GH). To see the current values use Get-DbcConfig:
[Screenshot: Get-DbcConfig output]

I’ve just picked the top few rows for the screenshot; as of writing (15/02/2018) there are 98 config options available. We’ve tried to make the configuration option names obvious, and the values are grouped into main sections:

  • app – Configuration for the module
  • domain – Configuration for authentication
  • mail – Configuration for sending reports
  • policy – The values for the tests
  • skip – Controls which tests should be excluded

And each has a useful description as well, to make it easy to find. In the screenshot you’ll notice that I’ve configured the mail.* options to suit my test environment, so now I can use Send-DbcMailMessage to email my test results without having to specify all the parameters. To set a configuration parameter you use the Set-DbcConfig command:

Set-DbcConfig -Name mail.subject -Value "DbcChecks report"

[Screenshot: Set-DbcConfig example]

Or perhaps, as mentioned at the start of this post, you don’t take a Full backup every day but once every 7 days. You can easily configure that:

Set-DbcConfig -Name policy.backupfullmaxdays -Value 7

And now your tests will be checking you’re never more than 7 days from a Full backup. By default we’re also checking that you’ve taken a Differential backup in the last 25 hours, so you’re now good to go!
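Then it’s just a case of running the checks (a hedged example; use Get-DbcCheck to confirm the exact check names in your installed version):

# List the available checks, then run the backup ones against an instance
Get-DbcCheck
Invoke-DbcCheck -SqlInstance MyInstance -Check LastBackup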

And don’t worry, you don’t need to reset these every time you run the tests. The PsFramework module persists these non-default settings in the Windows registry at:

Computer\HKEY_CURRENT_USER\Software\Microsoft\WindowsPowerShell\PSFramework\Config\Default

So on my system we can see:
[Screenshot: dbachecks registry storage]

Now what happens if you want to distribute these config changes to multiple servers, or share them with colleagues to make sure you’re all singing the same tune? We’d strongly recommend you don’t modify the .\internal\configurations\configuration.ps1 file directly, as it will be replaced whenever you update the module. To make your life easier (which is the entire point of the module) we’ve included export and import functionality. To export the configuration to an easy-to-parse JSON file you simply run:

Export-DbcConfig -Path c:\path\of\yourchoice\filename.json

If you omit the path value then by default we will export the results to $script:localapp\config.json. Now you’ve got a simple JSON file, it’s easy to source control (you are using source control, aren’t you?) to keep track of changes and make sure they were implemented as expected. If you want to apply that configuration to another install, it’s simply a case of running the import command:

Import-DbcConfig -Path c:\path\of\yourchoice\filename.json

and you’re done. We support UNC paths, so again it’s simple to have a central repository to apply the same configuration. And it works well with just fragments of configuration too; if all you wanted to control were the email settings, you can create a JSON file like this:

[
    {
        "Name":  "mail.failurethreshhold",
        "Value":  0,
        "Description":  "Number of errors that must be present to generate an email report"
    },
    {
        "Name":  "mail.from",
        "Value":  "null@stuart-moore.com",
        "Description":  "Email address the email reports should come from"
    },
    {
        "Name":  "mail.smtpserver",
        "Value":  "smtp.stuart-moore.com",
        "Description":  "Store the name of the smtp server to send email reports"
    },
    {
        "Name":  "mail.subject",
        "Value":  "DbcChecks report from stuart",
        "Description":  "Subject line of the email report"
    },
    {
        "Name":  "mail.to",
        "Value":  "SqlReports@stuart-moore.com",
        "Description":  "Email address to send the report to"
    }
]

Then you can import just this snippet to set the configuration for those options. This makes it easy to separate out enterprise-level configuration changes from the actual SQL test options. The same technique can be used to ‘force’ the correct backup testing parameters in all cases, while letting other tests be customised as needed.

Running simple PowerShell commands against multiple servers with a timeout

Many years ago I wrote a post on passing functions into Start-Job (Calling a PowerShell function in a Start-Job script block when it’s defined in the same script). Over the years I’ve had a number of emails from people asking about how to use it to fix their own situation. In many of these cases there’s been a simpler way to achieve what they wanted to do.

Sometimes a non-obvious solution using PowerShell basics can be much simpler. Patrick posted up that he was having problems running a command against a list of servers and wanted to be able to skip those that time out. Using an injected function for this was a bit of overkill. I had a premade solution I use, which relies on Invoke-Command and is simpler:

$session = New-PSSession -ComputerName Server1, Server2, Server3 -ErrorAction SilentlyContinue
$ServiceName = 'MSSQL`$'
$timeout = 10
$remote = Invoke-Command -Session $session -ScriptBlock {Get-Service -Name *$using:ServiceName*} -AsJob
$remote | Wait-Job -Timeout $timeout
$output = $remote | Receive-Job
$output | ForEach {"$($_.Name) on $($_.PsComputerName) is $($_.Status)"}

We set up new PSSessions using New-PSSession; I set ErrorAction to SilentlyContinue just in case a host isn’t available for some reason (if I was being good I’d try/catch here).

As we’re just using standard PowerShell functionality here with Get-Service, there’s no need to build a new function; we can just call it directly. By calling Invoke-Command against a session pointed at numerous hosts, we let PowerShell handle all the connection management and just assume the command will be run against each host. If we were running against a lot of hosts then we’d want to look into the -ThrottleLimit parameter to limit the number of concurrent hosts we’re hitting. The one little trick here is the using scope modifier, so PS pulls in the variable defined in our main scope (gory details on scoping here).
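If you did need to rein things in, it’s one extra parameter (a hedged example; 32 is Invoke-Command’s default, so 16 here is just to show the idea):

$remote = Invoke-Command -Session $session -ScriptBlock {Get-Service -Name *$using:ServiceName*} -AsJob -ThrottleLimit 16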

As we called Invoke-Command with the -AsJob switch we can now treat it as a job, so we can use the PS job cmdlets to manage it. The first thing we want to do is skip those jobs that are taking longer than our specified timeout value (in this case 10 seconds). So we pass our Invoke-Command job into Wait-Job with a Timeout parameter. PS will now keep an eye on each job and stop waiting on any that exceed our timeout limit.

Once we’ve got all the jobs that met our time limit, we grab the output using Receive-Job and then just process it like any other PowerShell object.
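It’s also worth tidying up after yourself once you have the output (assuming, as I do here, that you’re finished with the jobs and sessions):

# Remove the jobs (Force also stops any still running past the timeout) and close the sessions
$remote | Remove-Job -Force
$session | Remove-PSSession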

As we’ve removed a lot of complexity it’s much easier to revise or reuse the framework at a later date.

Nottingham and East Midlands PowerShell User Group

Interested in any form of PowerShell usage and based around the East Midlands and Nottingham? Then this could be the group for you. We’re looking to cover anything that uses (or can be used with) PowerShell. So topics that are fair game include:

  • AD Management
  • Scripting
  • Source Control
  • DevOps
  • Azure/AWS/Cloud Provider of your choice
  • Exchange management
  • SQL Server
  • Pester testing
  • Continuous Integration
  • .Net Internals
  • Generally anything that would interest someone using PowerShell or give their career a boost!

Nothing is set in stone yet as we want to get some feedback from potential members. There’s a date booked for the 8th February (Kick Off meeting), but what happens is up to you.

Would you prefer a traditional usergroup with booked speakers given presentations in a formal setting, or something more informal like a roundtable/whiteboard sessions? Or perhaps half and half?

We’d love to know what you’d like to see or learn. So please either drop a comment below or sign up for the Kick Off Meeting and give us feedback on Meetup.

Complex SQL Server restore scenarios with the dbatools Restore-DbaDatabase pipeline

No matter how hard the dbatools team try, there’s always someone who wants to do things we’d never thought of. This is one of the great things about getting feedback direct from a great community. Unfortunately a lot of these ideas are either too niche to implement, or would need a lot of complex code for a single use case.

As part of the Restore-DbaDatabase stack rewrite, I wanted to make it easier for users to get their hands dirty within the restore stack. Not necessarily needing to dive into the core code and the world of GitHub Pull Requests, but by manipulating the data flowing through the pipeline using standard PowerShell techniques, all the while letting our code do the heavy lifting.

So, below the fold we’ll be looking at some examples of how you can start going to town with your restores.
