Musings of a Data professional

Stuart Moore


Day 9 – Limitations of Restore-DbaDatabase – 31 days of backup and restores

Limitations of Restore-DbaDatabase

Like all tools, Restore-DbaDatabase isn’t able to do everything that everyone wants it to. Certainly, at the moment I’d like it to write its own blog posts and fetch me a cold beer, but that doesn’t happen.

None of the below is meant as a complaint about people asking for features. If we can do it, we will, and we’re keen to make this work for as many people in as many situations as possible.

But quite a few requests over the years have been non-starters for a number of reasons.

Day 8 – Why is Restore-DbaDatabase like it is? – 31 days of dbatools backup and restores

Why is Restore-DbaDatabase like it is?

At first glance Restore-DbaDatabase looks like a slow, lumbering, complex beast. In reality it’s not that bad.

It’s the result of design decisions I took in wanting a solid, versatile command that could cope with everything people would want from it.

In this post, we’ll go through the main decisions/points of contention one by one.

Day 7 – Backing up to Azure – 31 days of dbatools backup and restore

Day 7 – Backing up to Azure

Backing up to Azure? Why would you want to do that if you’re not running Azure SQL Database/Managed Instance?

There are a couple of reasons, which you may or may not agree with:

  • Reliable storage
  • Off-site storage
  • Scalable storage
  • Cheap(ish) storage

All of those are of course debatable, depending on your circumstances. But if you’re starting from scratch and don’t already have an on-premises storage infrastructure, then it’s quicker to start there and build up. And if you’re running SQL Server inside an Azure VM, do you want to be piping data back down to the home network, or paying for another VM to act as a backup target?

The main reason I like Azure storage when I can use it is number 2 – off-site storage.

My backups will survive a major incident, say a failed Data Centre or a major fire. OK, those could be mitigated with a secondary Data Centre, but that’s an extra level of expense that your organisation might not be able to justify.

And there are disasters that may affect all storage in your organisation. Ransomware is the one that springs to mind. If your SQL Server backups get encrypted by someone other than yourself, then they are worthless. You have NO way of recovering without paying the money. It has happened!

Companies who think they have good security have found that ransomware can still propagate onto their backup servers, as the software is written to exploit any weakness in NTLM or SMB it can find. However, I’ve yet to hear of one that can break into SQL Server and get hold of a credential, which is what would need to happen for it to even know about SQL Server backups stored in Azure.

And if you’re really paranoid (you’re probably a DBA, so there’s a good chance), you can even use an Azure Function to move all new backups to another Azure Storage account when they finish, so there’s no traceable link anywhere on premises.
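I won’t cover deploying the Function itself here, but the copy it performs is short. Here’s a minimal sketch using the Az.Storage module; the account names, keys and container names are all placeholders:

    # Minimal sketch of the copy an Azure Function (or scheduled job) could run.
    # Account names, keys and container names below are placeholders.
    Import-Module Az.Storage
    $srcKey  = '<source storage account key>'
    $destKey = '<destination storage account key>'
    $src  = New-AzStorageContext -StorageAccountName 'dbatoolsaz' -StorageAccountKey $srcKey
    $dest = New-AzStorageContext -StorageAccountName 'dbatoolsarchive' -StorageAccountKey $destKey
    # Copy every blob in the backup container over to the second account
    Get-AzStorageBlob -Container 'azbackups' -Context $src |
        ForEach-Object {
            Start-AzStorageBlobCopy -SrcContainer 'azbackups' -SrcBlob $_.Name -Context $src `
                -DestContainer 'azbackups' -DestContext $dest -Force
        }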

Setting up Azure for SQL Server Backups

Now, I’m going to cheat here a little and point you at a couple of posts I’ve already written that cover setting up Azure Blob storage for SQL Server backups:

If you want to use the Azure Portal – Creating Azure Blob Storage Account for SQL Server Backup and Restore, using the Azure Portal

If you want to use PowerShell – Creating Azure Blob Storage Account for SQL Server Backup and Restore, using PowerShell

As I say in the above posts, I’d really, really recommend you go with Shared Access Signatures. They offer better security granularity, they’re easier to work with, and they’re the future.

Once you’ve created your Azure Blob storage account and the SQL Server credential, you can use Backup-DbaDatabase to back up to Azure.
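For reference, here’s a minimal sketch of creating that credential with dbatools; the SAS token value is a placeholder, and the credential name needs to match the URL you’ll pass as -AzureBaseUrl:

    # The SAS token below is a placeholder; generate a real one for your container
    $sasToken = ConvertTo-SecureString 'sv=2019-02-02&ss=b&srt=sco&sig=...' -AsPlainText -Force
    New-DbaCredential -SqlInstance server1\instance `
        -Name 'https://dbatoolsaz.blob.core.windows.net/azbackups/' `
        -Identity 'SHARED ACCESS SIGNATURE' `
        -SecurePassword $sasToken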

When backing up to Azure you are limited to a subset of parameters; this is purely because Azure Blob storage won’t support things like MaxTransferSize. The allowed parameters are:

  • CopyOnly
  • Type
  • CompressBackup
  • Checksum
  • Verify
  • AzureCredential
  • ReplaceInName
  • CreateFolder
  • OutputScriptOnly
  • EncryptionAlgorithm
  • EncryptionCertificate

When using a Shared Access Signature you can also use FileCount.

Backing up to Azure with Backup-DbaDatabase

There are 2 ways the command can be used to back up to Azure, depending on whether you’re using Access Keys or Shared Access Signatures.

With Access keys you use this format:

    Backup-DbaDatabase -SqlInstance server1\instance -Database db1 -AzureBaseUrl https://dbatoolsaz.blob.core.windows.net/azbackups/ -AzureCredential myAzureCred

Where myAzureCred is the name of the credential on server1\instance that holds the Azure Access Keys. When using the older Access Keys style you cannot stripe backups, so you can only provide one URL.

With a Shared Access Signature you use this format:

    Backup-DbaDatabase -SqlInstance server1\instance -Database db1 -AzureBaseUrl https://dbatoolsaz.blob.core.windows.net/azbackups/

This works because Backup-DbaDatabase will look for a credential named ‘https://dbatoolsaz.blob.core.windows.net/azbackups/’ on instance ‘server1\instance’.

With Shared Access Signature credentials you can stripe files, either within the same container:

    Backup-DbaDatabase -SqlInstance server1\instance -Database db1 -AzureBaseUrl https://dbatoolsaz.blob.core.windows.net/azbackups/ -FileCount 3

or by specifying multiple AzureBaseUrl values:

    Backup-DbaDatabase -SqlInstance server1\instance -Database db1 -AzureBaseUrl https://dbatoolsaz.blob.core.windows.net/azbackups/,https://azdbatools.blob.core.windows.net/azbackups/ 

This assumes you have valid credentials named ‘https://dbatoolsaz.blob.core.windows.net/azbackups/’ and ‘https://azdbatools.blob.core.windows.net/azbackups/’ on the SQL instance performing the backup.

Conclusion

Backing up your SQL instances to Azure can be a good option if your organisation is just starting out, and having backups off site minimises some of the risks in a Disaster Recovery scenario. Plus it’s a great way to migrate databases up to Azure; in fact, that’s how Copy-DbaDatabase works.

For more ideas on how to use dbatools to migrate SQL Server databases to Azure, have a look at the book dbatools in a month of lunches.

All posts in this series can be found at 31 Days of dbatools Backup and Restores

Day 5 – Performance Tuning Backups – 31 Days of dbatools Backup and Restores

Day 5 – Performance Tuning Backups

Why does backup performance matter?

As your database gets bigger, backups take longer to complete. This makes sense: more data has to be written to disk, which is going to take longer.

The reason it becomes a problem is that your organisation will have a Recovery Point Objective (RPO), which is the most data loss they’re willing to accept in the event of a disaster. Now, it might not be officially announced, but I’m sure management has a number that’s lower than you think.

To hit your RPO you need to have less time than that between your backups. So let’s say your RPO is 15 minutes: this means you need a backup at LEAST every 15 minutes to hit the target. If your transaction log backups are taking 20 minutes, you’ll never make it.
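Before tuning anything it’s worth knowing where you stand; here’s a quick sketch using dbatools’ backup history (the instance and database names are placeholders):

# How long have the recent log backups been taking?
Get-DbaDbBackupHistory -SqlInstance server1\instance -Database db1 -Type Log |
    Select-Object -Property Database, Start, End, Duration -First 10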

Obviously we can’t reduce the amount of data being created; your boss won’t thank you for rate-limiting customers!

So all we can do is make things flow out faster. Let’s see how we do that:
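The full post digs into the details, but as a taste, here’s a sketch of the main knobs Backup-DbaDatabase exposes for this. The values are illustrative starting points, not recommendations; test against your own storage:

# Illustrative values only; test to find what works for your hardware
$perfParams = @{
    SqlInstance     = 'server1\instance'
    Database        = 'db1'
    Path            = '\\backups\sql'
    FileCount       = 4      # stripe the backup across 4 files
    MaxTransferSize = 2MB    # bigger chunks per write to the backup device
    BufferCount     = 64     # more I/O buffers in flight
}
Backup-DbaDatabase @perfParams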

Day 4 – Backup Encryption with Backup-DbaDatabase – 31 days of dbatools backup and restores

Day 4 – Backup Encryption with Backup-DbaDatabase

Now, we all take our db security very seriously (or at least I hope we do?). But a lot of people don’t extend that thinking to their database backups.

A database backup is a full copy of the database; by definition, all of the data is in there.

So if someone gets their hands on it? Then they may as well have dbo on your production database, as they can just restore it on their own SQL instance and go to town on the data.

That is, unless you’ve implemented some security around your backups.
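The full post walks through the options, but as a taste, here’s a minimal sketch of native backup encryption with dbatools. It assumes a database master key already exists in master, and ‘BackupCert’ is a certificate name made up for illustration:

# Assumes master already has a database master key; 'BackupCert' is illustrative
New-DbaDbCertificate -SqlInstance server1\instance -Database master -Name BackupCert
Backup-DbaDatabase -SqlInstance server1\instance -Database db1 -Path \\backups\sql `
    -EncryptionAlgorithm AES256 -EncryptionCertificate BackupCert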

An introduction to dbatools backup and restores – Day 1 of 31 days of dbatools Backup and Restores

Day 1 of 31 – Introduction

Welcome to Day 1 of 31 days of posts on dbatools backup and restore capabilities.

Wait, that sounds like a lot of posts about dbatools backup and restores? Well, some years ago I did a similar series on standard PowerShell Backup and Restores, and all of that plus a lot more is now built into the dbatools module.

This means there’s a lot of power hidden in Backup-DbaDatabase and Restore-DbaDatabase, but it also means it’s easy to miss features and options. So 31 days is about right to cover everything, I think.

Currently it’s 1st May 2020, and we’re in lockdown in the UK. Which you’d think means lots of free time, right? It’s not quite working out like that, so there may be a gap when real life takes over. But I’m going to try my best for a post every day this month.

Why me?

I wrote a lot of the Restore and Backup code, so I’ve got a good idea of how it hangs together and just why it was designed in certain ways. It wasn’t just to annoy people; there was a plan to offer some big power features. Really!

About the examples

Just to prove the flexibility of things, I’ll be using as many different versions of PowerShell and SQL Server as I can to demo things. If you want to follow along, then Windows PowerShell 5.1 and SQL Server -gt 2008 will be fine.

Level

We’ll be starting off with simple concepts and examples, and then building on those. Most things we’ll be looking at won’t be too technical unless we’re inside the functions. If something doesn’t make sense, then please let me know and I’ll do my best to clear anything up.

So, that’s Day 1. Tomorrow we’ll begin by taking a look at the basics of Backup-DbaDatabase

31 Days of Backup and Restore with dbatools

So I’m doing another 31 days of posts, hopefully on consecutive days, but given the current pandemic and sporadic working hours it may not work out that way.

This is the index of all the posts; I’ll update it as each one is posted.

Posts


dbaSecurityScan update

As mentioned before, I’ve started working on dbaSecurityScan, a PowerShell module for tracking changes to a database’s security model and bringing the database back into line.

Making the most of not being able to leave the house for very long, I’ve been cracking along on getting the basics in place. Unlike a PowerShell module like dbatools, this one doesn’t break down nicely into small lumps, so it’s likely that new features will be added in big lumps until we hit a critical mass.

So, I’ve just pushed up a nice big change that shows some of the core features working properly. In the current version we have:

  • Creating a database security config from an existing database for the following security principals
    • Users
    • Objects
    • Roles
    • Schemas
  • Test the database security config against a database
  • Take the test results and generate a list of actions that need to be undertaken to bring the database back in line
  • Run a ‘WhatIf’ pass of the fixes to generate a list of actions to be undertaken
  • Apply the fixes to bring a database back in line, with a choice of running all actions, just those that add/grant security measures, or just those that drop/revoke security measures

Some code to show how those features currently work:

# This is based on the module state on 18th April 2020, it will have moved on since then!
Import-Module dbaSecurityScan
# Create config and test against a new database
$config = New-DssConfig -SqlInstance server1\sql2017 -Database db1
$config | ConvertTo-Json -Depth 5 | Out-File \\file\store\db1SecurityConfig.json
# time passes and we want to check for security drift
# hydrate the config back into an object
$config = ConvertFrom-Json -InputObject (Get-Content \\file\store\db1SecurityConfig.json -Raw)
# Test the database against the config and vice versa
# This will run all the tests and store the pass/fail results in $testResults
$testResults = Invoke-DssTest -SqlInstance server1\sql2017 -Database db1 -Config $config
# If you don't want to see all of the Pester output as well, you can add -Quiet
$testResults = Invoke-DssTest -SqlInstance server1\sql2017 -Database db1 -Config $config -Quiet
# Now we can use the results to bring the database back to security baseline
# Let's first sanity check any fixes; -OutputOnly will just output the intended actions and not apply any
$fixResults = Reset-DssSecurity -SqlInstance server1\sql2017 -Database db1 -TestResult $testResults -OutputOnly
# Once you've approved the changes, they can be applied like so:
$fixResults = Reset-DssSecurity -SqlInstance server1\sql2017 -Database db1 -TestResult $testResults

On the to-do list for the next couple of weeks of lockdown are:

  • Add fixing functions for roles and objects
  • Add Pipeline support
  • Add better comment based help
  • Allow comparison of config documents

Please get in touch if you’ve got any ideas for features. If you want to lend a hand, then I’m happy to take Pull Requests over at https://dbasecurityscan.io/.

Please also report any bugs/issues over on GitHub.

dbaSecurityScan – A new PS module for SQL Server security

While doing some work on my materials for SqlBits training day I started thinking about a few of the problems with managing SQL Server permissions.

How easy is it to audit them? If someone asks you, the DBA, exactly who has access to object A, can you tell them? How do people get access to that object? Is it via a role, a schema, or an explicit permission?

Is that information available in a format that’s easy to read or manipulate?

How do you ensure that permissions persist between upgrades? I’ve certainly seen 3rd party upgrades that have reset database-level permissions. Do you have a mechanism to check every permission and put them back as they were?

We’re all doing the devops these days. Our database schema is source controlled, and we’re deploying it incrementally in pipelines and testing it. But are we doing that with our database security?

So in the classic open source way, I decided to scratch my own itch by writing something. That something is dbaSecurityScan, a PowerShell module that aims to offer a solution for all of the above.

The core functionality at the moment allows you to export the current security state of your database, based on these 4 models:

  • Role based
  • User based
  • Schema based
  • Object based

You can also return all of them, or a combination of whatever you need.

At the time of writing, getting the security information and testing it is implemented, and you can try it out like this:

Import-Module dbaSecurityScan
# Create config and test against a new database
$config = New-DssConfig -SqlInstance server1\sql2017 -Database oldProd
Invoke-DssTest -SqlInstance server2\sql2019 -Database newProd -Config $config
# Save a config, and then use it to audit after a change
New-DssConfig -SqlInstance server1\sql2017 -Database oldProd -ConfigPath ~/git/dbSourceControl/permissions.json
Start-Sleep -Seconds 10080
$config = ConvertFrom-Json -InputObject (Get-Content ~/git/dbSourceControl/permissions.json -Raw)
Invoke-DssTest -SqlInstance server2\sql2019 -Database newProd -Config $config

If you’ve got any suggestions for features, or want to lend a hand then please head over to dbaSecurityScan and raise an issue or pull request respectively 🙂

Making SQL Agent Jobs Availability Group aware with dbatools

A new system has rocked up at work. To keep the database nice and available across a couple of sites, we’ve implemented a SQL Server Availability Group solution.

The setup for Availability Groups is well documented and dbatools has plenty of AG commands to help out and keep things in sync across the replicas.

But our issue was coping with all the 3rd party SQL Server stored procedures that weren’t Availability Group aware.

What do I mean by Availability Group aware? When running on an Availability Group, one SQL Server instance ‘owns’ the database at any point in time, but the SQL Agent jobs have to be replicated across all of the instances in the cluster. So you want to make sure that your SQL Server Agent jobs only do work on the instance that currently owns the Availability Group.

Doing this is pretty simple. Below is a piece of T-SQL that checks if the current SQL Server Instance is the primary instance in the AG. If it isn’t then we exit with an error.

IF (SELECT repstate.role_desc
    FROM sys.dm_hadr_availability_replica_states repstate
        INNER JOIN sys.availability_groups ag
            ON repstate.group_id = ag.group_id AND repstate.is_local = 1) != 'Primary'
BEGIN
    RAISERROR ('Not Primary', 2, 1)
END

We exit with an error so we can make use of a SQL Agent job step’s ‘On failure’ action to quietly exit the job.

Why do we want to quietly exit the job? If the job exits with a failure, then your monitoring system will hammer you with lots of alerts about regularly failing jobs (you are monitoring your SQL Agent jobs, aren’t you?).

As we’re going to be using PowerShell to push this around a lot of jobs, let’s throw it into a variable:

$stepsql = "IF (SELECT repstate.role_desc
    FROM sys.dm_hadr_availability_replica_states repstate
        INNER JOIN sys.availability_groups ag
            ON repstate.group_id = ag.group_id AND repstate.is_local = 1) != 'Primary'
BEGIN
    RAISERROR ('Not Primary', 2, 1)
END"

Next we’re going to grab all the Agent jobs we want to update. Luckily for me, the company prefixed all of their jobs with a unique stamp, so I just used a filter on the job name:

$jobs = Get-DbaAgentJob -SqlInstance MyInstance | Where-Object {$_.Name -like 'SVC_*'}

To keep things easy to read and avoid line wrapping, I like to use parameter splatting to keep it clean. So we create a hashtable of values like so:

$jobParameter = @{
    SqlInstance = 'MyInstance'
    StepName = 'AgCheck'
    Database = 'Master'
    Subsystem = 'TransactSql'
    StepId = '1'
    OnFailAction = 'QuitWithSuccess'
    OnSuccessAction = 'GoToNextStep'
    Command = $stepsql
    Insert = $True
}

The Insert switch is new as of 15th October 2019 (I’ve just added it via a Pull Request). When it’s specified, the command will insert the new step at the StepId specified. So in this example it’s going to be the first step executed, as the steps start from 1.

The Insert switch causes the command to increment the StepID of all subsequent Job steps by 1 so it can fit in. It will also increment the OnFailStep and OnSuccessStep values if the target steps have been moved so the flow isn’t affected.

In this example we set our OnFailAction to be QuitWithSuccess; as mentioned above, this will stop our monitoring system filling up.

All that’s left is to loop through all of the jobs in our collection and use New-DbaAgentJobStep to insert it:

Foreach ($job in $jobs) {
    New-DbaAgentJobStep -Job $job @jobParameter
}

To do this across the other Availability Group nodes we have 3 options. We can modify our hashtable to make use of New-DbaAgentJobStep‘s ability to target multiple SQL Server instances:

$jobParameter = @{
    SqlInstance = ('MyInstance','MyInstance2','MyInstance3')
    StepName = 'AgCheck'
    Database = 'Master'
    Subsystem = 'TransactSql'
    StepId = '1'
    OnFailAction = 'QuitWithSuccess'
    OnSuccessAction = 'GoToNextStep'
    Command = $stepsql
    Insert = $True
}

Or set up and test on a single node, and then use Sync-DbaAvailabilityGroup. This will sync a wide range of objects around an Availability Group (jobs, logins, credentials, custom errors, and many more). If you only want to synchronise the SQL Server Agent jobs, then Copy-DbaAgentJob will do just that, as sketched below.
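For instance, a minimal one-liner reusing the $jobs collection from earlier (instance names as before):

# Copy just the matching Agent jobs from the node we set up to the other replicas
Copy-DbaAgentJob -Source MyInstance -Destination MyInstance2, MyInstance3 -Job $jobs.Name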

Hopefully this little change is going to make a few people’s life easier, it’s certainly done that for me.

