While setting up for a large migration project involving a third party I had to move a load of tables out of the way to accommodate a sudden change in approach. This happens fairly often with our dev environments, so I thought I'd share a couple of approaches I use to do this: first up, renaming tables using T-SQL, and then renaming tables with dbatools.
WARNING!!!!!
Renaming any SQL Server object needs to be done carefully. Any existing references to the object will break, so any view or stored procedure referencing the table will stop working until you fix it.
This can be a useful trick if you’ve been told you can remove some tables. Rename them, leave them for a day, and then if anyone screams you can rename them back
To do this we're going to generate the SQL statements using system views and then run them.
Let's say you have a bunch of tables you want to rename. For the first example we want to prefix the names of tables containing 'user' with 'tmp_' while other testing goes on in the database.
The key to this is using sys.all_objects. To get our list of tables to rename, we use some simple T-SQL:
select name from sys.all_objects where name like '%user%' and type_desc='USER_TABLE';
Adding something to specify the type of object is ALWAYS a good idea, just to avoid trying to rename a stored procedure or system table.
Now use the T-SQL to build up sp_rename statements:
select 'exec sp_rename '''+name+''', ''tmp_'+name+''';' from sys.all_objects where name like '%user%' and type_desc='USER_TABLE';
Which will give you a nice little list of SQL statements that you can then copy and paste into a query window to run.
Whilst the dbatools module doesn't have a Rename-DbaTable command (yet!), you can still use dbatools to rename tables with a little bit of added PowerShell.
The first part of the pipeline is going to grab all the tables from the database using Get-DbaDbTable. We then pass that into Where-Object to filter down to the tables we want, and then into a ForEach-Object loop for the actual rename, which we do with the SMO Rename() method:
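A minimal sketch of that pipeline, assuming dbatools is installed (the instance, database and name filter are illustrative):

Get-DbaDbTable -SqlInstance 'server1' -Database 'dev_db' |
    Where-Object Name -like '*user*' |
    ForEach-Object {
        # SMO's Rename() method performs the rename on the server
        $_.Rename("tmp_$($_.Name)")
    }

Keeping the whole job to one pipeline also means the same Where-Object filter can be reused later to rename everything back.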
While doing some work on my materials for the SQLBits training day I started thinking about a few of the problems with managing SQL Server permissions.
How easy is it to audit them? If someone asks you, the DBA, exactly who has access to object A, can you tell them? How do people get access to that object: is it via a role, a schema or an explicit permission?
Is that information in an easy to read or manipulate manner?
How do you ensure that permissions persist between upgrades? I’ve certainly seen 3rd party upgrades that have reset database level permissions. Do you have a mechanism to check every permission and put them back as they were?
We’re all doing the devops these days. Our database schema is source controlled, and we’re deploying it incrementally in pipelines and testing it. But are we doing that with our database security?
So in the classic open source way, I decided to scratch my own itch by writing something. That something is dbaSecurityScan, a PowerShell module that aims to offer a solution for all of the above.
The core functionality at the moment allows you to export the current security state of your database, based on these 4 models:
Role based
User based
Schema based
Object based
You can also return all of them, or a combination of whatever you need.
At the time of writing, getting the security information and testing it is implemented, and you can try it out like this:
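The embedded example isn't reproduced here, but it boils down to something like this sketch; the function and parameter names below are from memory and may differ from the module's current API, so check the repo for the real thing:

# Capture the database's current security state as a config, then test a target database against it
$config = New-DssConfig -SqlInstance 'server1' -Database 'proddb'
Invoke-DssTest -SqlInstance 'server2' -Database 'proddb_copy' -Config $config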
If you’ve got any suggestions for features, or want to lend a hand then please head over to dbaSecurityScan and raise an issue or pull request respectively 🙂
So previously I showed you a technique for updating the owner on all your SQL Agent jobs.
Whilst this isn't a major change, you do need to have checked that the new ownership account has the correct permissions, or some things will break (e.g. jobs using EXECUTE AS or CmdExec steps). With this in mind, someone asked me how I'd back this out, either for all jobs or for that one specific MUST NOT FAIL job that is now failing.
As we have 2 ways to change them, we have 2 ways to back out the changes. Note that both of these require you to have done a small amount of work up front; there is no magic rollback option 🙁
PowerShell and dbatools
This one is a little simpler, as we can use a lot of PowerShell plumbing to do the lifting for us. To set this up we use:
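Something along these lines, saving each job's current owner to CSV so there's something to roll back to (the instance name and CSV path are illustrative):

Get-DbaAgentJob -SqlInstance 'server1' |
    Select-Object Name, OwnerLoginName |
    Export-Csv -Path '.\JobOwnerHistory.csv' -NoTypeInformation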
In both cases we read in our CSV and convert it into PowerShell objects. For a single job we use Where-Object to filter down to that one job; you could also use -like if you wanted to pattern match. The remaining records then get piped through a ForEach-Object loop, where we use Set-DbaAgentJob to reset the job's owner.
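As a hedged sketch, assuming Set-DbaAgentJob's -OwnerLogin parameter (the instance, CSV path and job name are illustrative):

Get-Content '.\JobOwnerHistory.csv' | ConvertFrom-Csv |
    Where-Object Name -eq 'That MUST NOT FAIL job' |
    ForEach-Object {
        # Put the owner back to the value we saved before making the change
        Set-DbaAgentJob -SqlInstance 'server1' -Job $_.Name -OwnerLogin $_.OwnerLoginName
    }

Drop the Where-Object line to reset every job in the file.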
T-SQL
The pre setup for this one involves creating and populating a table to store the old values:
CREATE TABLE JobOwnerHistory(
Job_Id char(36),
JobName nvarchar(128),
OwnerLoginName nvarchar(50)
)
INSERT INTO JobOwnerHistory
SELECT
sj.job_id,
sj.name as JobName,
sp.name as 'OwnerLoginName'
FROM
msdb..sysjobs sj
INNER JOIN sys.server_principals sp on sj.owner_sid=sp.sid
So now, resetting a job’s owner is just a modification of our original script:
DECLARE @job_id char(36)
DECLARE @OwnerLoginName varchar(50)
DECLARE JobCursor CURSOR
FOR
SELECT
job_id,
OwnerLoginName
FROM
JobOwnerHistory
--WHERE JobName LIKE '%Finance%'
OPEN JobCursor
FETCH NEXT FROM JobCursor INTO @job_id, @OwnerLoginName
WHILE (@@FETCH_STATUS <> -1)
BEGIN
exec msdb..sp_update_job
@job_id = @job_id,
@owner_login_name = @OwnerLoginName
FETCH NEXT FROM JobCursor INTO @job_id, @OwnerLoginName
END
CLOSE JobCursor
DEALLOCATE JobCursor
As written that will reset every job in JobOwnerHistory; if you want to filter down to a subset of jobs you'd uncomment and modify the WHERE line.
Hope those examples are helpful. If you’ve stumbled across this page and it doesn’t quite fix your problem, please drop a comment and I’ll try to lend a hand.
No matter how hard the dbatools team try, there's always someone who wants to do things we'd never thought of. This is one of the great things about getting feedback direct from a great community. Unfortunately a lot of these ideas are either too niche to implement, or would require a lot of complex code for a single use case.
As part of the Restore-DbaDatabase stack rewrite, I wanted to make it easier for users to get their hands dirty within the restore stack. Not necessarily by diving into the core code and the world of GitHub pull requests, but by manipulating the data flowing through the pipeline using standard PowerShell techniques, all the while letting our code do the heavy lifting.
So, below the fold we’ll be looking at some examples of how you can start going to town with your restores
A new version of dbatools Restore-DbaDatabase command was released into the wild this week. One of the main aims of this release was to make it easier to debug failures in the restore process, and to drag information out of the pipeline easily (and anonymously) so we can increase our Pestering of the module with Unit and Integration tests.
So I’d like to share some of the features I’ve put in so you can take part.
The biggest change is that Restore-DbaDatabase is now a wrapper around 5 public functions. The 5 functions are:
Get-DbaBackupInformation
Select-DbaBackupInformation
Format-DbaBackupInformation
Test-DbaBackupInformation
Invoke-DbaAdvancedRestore
These can be used individually for advanced restore scenarios; I'll go through some examples in a later post.
For now it’s enough to know that Restore-DbaDatabase is a wrapper around this pipeline:
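In rough terms it looks like this (the instance and path are illustrative, and most parameters are trimmed for clarity; the real wrapper passes many more through):

Get-DbaBackupInformation -SqlInstance server1 -Path '\\backupserver\share' |
    Select-DbaBackupInformation |
    Format-DbaBackupInformation |
    Test-DbaBackupInformation -SqlInstance server1 |
    Invoke-DbaAdvancedRestore -SqlInstance server1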
and its other function is passing parameters into these sub functions as needed.
With previous versions of Restore-DbaDatabase you were restricted to throwing data in at one end and seeing what came out of the other, with some insight provided by Verbose messages. Now things can be stepped through, and data extracted as needed, in a format that plugs straight into our testing functions.
Get-DbaBackupInformation
This is the function that gets all of the information about backup files. It scans the given paths, and uses Read-DbaBackupHeader to extract the information from them. This is stored in a dbatools BackupHistory object (this is the same as the output from Get-DbaBackupHistory, so we are standardising on a format for Backup information to be passed between functions).
So this would be a good place to check that you've got the files you think you should have, and it's also the first place we'd look if you had a report of a break in the LSN chain.
To get the output from the pipeline at this point we use the GetBackupInformation parameter:
Restore-DbaDatabase ... -GetBackupInformation gbi
This will create a globally scoped variable $gbi containing the output from Get-DbaBackupInformation. Note that when passing the name to Restore-DbaDatabase you do not need to specify the $.
If you want to stop execution at this point, then use the -StopAfterGetBackupInformation switch. This will stop Restore-DbaDatabase from going any further.
This is also a good way of saving time on future runs, as the BackupHistory object can be passed straight in, saving the overhead of reading all the file headers again:
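A minimal sketch of both runs, assuming Restore-DbaDatabase's -TrustDbBackupHistory switch for the second one (instance and path are illustrative):

# First run: scan the backup files once, keep the results in $gbi, and stop
Restore-DbaDatabase -SqlInstance server1 -Path '\\backupserver\share' -GetBackupInformation gbi -StopAfterGetBackupInformation

# Later runs: pipe the saved history straight back in, skipping the file scan
$gbi | Restore-DbaDatabase -SqlInstance server1 -TrustDbBackupHistory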
Select-DbaBackupInformation
Here we filter down the output from Get-DbaBackupInformation to restore to the point in time requested, or to the latest point we can. This means we find:
– the last full backup before the point in time
– the latest differential between the full backup and the point in time
– and then all transaction log backups to get us to the requested time
This is done for every database found in the BackupHistory object
Here is where we’d begin looking for issues if you had a ‘good’ LSN chain from Get-DbaBackupInformation and then it broke.
To get this data you use the SelectBackupInformation parameter, passing in the name of the variable you want to store the data in (without the $ as per GetBackupInformation above)
There is also a corresponding StopAfterSelectBackupInformation switch to halt processing at this point. We stop processing at the first stop in the pipeline, so specifying multiple StopAfter* switches won't have any effect.
Format-DbaBackupInformation
This function performs the transforms on the BackupHistory object per the parameters pushed in. This includes renaming databases, and file moves and renames. For everything we touch we add an extra property prefixed with Original to the BackupHistory object; for example, the original name of the database will be in OriginalDatabase, and the target name will be in Database.
So this is a good spot to test why transforms aren’t working as expected.
To get data out at this pipeline stage use the FormatBackupInformation parameter with a variable name. As normal, it has an accompanying StopAfterFormatBackupInformation switch to halt things there.
Test-DbaBackupInformation
Before passing the BackupHistory object off to be restored we do some checks to make sure everything is OK. The following checks are made:
LSN chain complete
Does a destination file exist, if owned by a different database then fail
Does a destination file exist, if owned by the database being restored is WithReplace specified
Can SQL Server see and write to all the destination folders
Can SQL Server create any destination folders missing
Can SQL Server see all the backup files
If a database passes all these checks then its backup history is marked as restorable by the IsVerified property being set to $True.
To get the data stream out at this point use the TestBackupInformation parameter.
General Errors with restores
Once we’re past these stages, then our error reporting is at the mercy of the SMO Restore class. This doesn’t always provide an obvious cause straight away. Usually the main error can be found with:
$error[1] | Select-Object *
We like to think we capture most restore failure scenarios nicely, but if you find something we don't then please let us know, either on Slack or by raising a GitHub issue.
As usual, the dbatools terminating error will be in $error[0].
Providing the information for testing or debugging.
If you’re running in to problems then the dbatools team may ask you to provide the output from one of these stages so we can debug it, or incorporate the information into our tests.
Of course you won’t want to share confidential information with us, so we would recommend anonymising your data. My normal way of doing this is to use these 2 stubbing functions:
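The embedded functions aren't reproduced here; an illustrative stand-in with the same behaviour might look like this (both function names are made up for this sketch, and the property handling is simplified):

# Deterministic anonymiser: the same input string always produces the same token
function Get-AnonymisedValue {
    param([string]$Value)
    $sha  = [System.Security.Cryptography.SHA256]::Create()
    $hash = $sha.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($Value))
    ([System.BitConverter]::ToString($hash) -replace '-', '').Substring(0, 12)
}

# Walk the sensitive properties on each BackupHistory object and swap in the tokens
function ConvertTo-AnonymisedBackupInformation {
    param([Parameter(ValueFromPipeline)]$BackupHistory)
    process {
        foreach ($prop in 'ComputerName', 'InstanceName', 'SqlInstance', 'Database', 'UserName',
                 'Path', 'FullName', 'OriginalDatabase', 'OriginalFullName', 'ReplaceDatabaseName') {
            if ($BackupHistory.$prop) {
                $BackupHistory.$prop = Get-AnonymisedValue ([string]$BackupHistory.$prop)
            }
        }
        $BackupHistory
    }
}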
This method will anonymise the values in ComputerName, InstanceName, SqlInstance, Database, UserName, Path, FullName, FileList, OriginalDatabase, OriginalFileList, OriginalFullName and ReplaceDatabaseName, but it will produce the same output for the same input, so we can work with multiple database sets at once.
I hope that’s been of some help. As always if you’ve a question then drop a comment below, ping me on twitter (@napalmgram) or raise an issue with dbatools on Slack or Github
As part of my talk at the Nottingham SQL Server User group last night, I offered some tips on how to track SQL Server index usage over time.
sys.dm_db_index_usage_stats is a very handy view for seeing how many seeks, scans and updates your indexes get. But there are a few issues with it. Firstly, it's a DMV, so it will be flushed on a number of events:
SQL Server Instance Restart
Database Detach
Database Close
SQL Server 2012/2014 – Alter Index rebuild
For a good, thorough check you normally need a full month's worth of stats, as this normally covers most of the business processes. It may even be useful to compare stats across months so you also capture events such as financial year end.
A better breakdown of when the scans and seeks happened can be very useful. It's all well and good knowing your index was scanned 5000 times and seeked 15000 times, but what processes were doing that? Was it the OLTP workload during 9-5 or was it the overnight processing? This could make a big difference to any planned index changes.
So here's a technique I use. It's pretty simple and allows you to tune it to your own needs.
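The core of it, as a hedged sketch (instance, database and table names are illustrative; in practice you'd schedule this from an Agent job or similar):

$query = @"
IF OBJECT_ID('dbo.IndexUsageHistory') IS NULL
    CREATE TABLE dbo.IndexUsageHistory (
        CaptureDate datetime2 NOT NULL,
        database_id int, object_id int, index_id int,
        user_seeks bigint, user_scans bigint, user_lookups bigint, user_updates bigint
    );

-- Snapshot the running totals; comparing consecutive snapshots shows when the activity happened
INSERT INTO dbo.IndexUsageHistory
SELECT SYSDATETIME(), database_id, object_id, index_id,
       user_seeks, user_scans, user_lookups, user_updates
FROM sys.dm_db_index_usage_stats;
"@
Invoke-DbaQuery -SqlInstance 'server1' -Database 'DBA' -Query $query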
The SQLAutoRestores module as it currently stands (04/08/2016, v0.9.0.0) is very much based on my personal usage patterns, but the plan is to make it as flexible as possible. So if you see something in this workflow that you'd like to change, or need something to cope with your SQL Server backup procedures, then please leave a comment below, or drop in a feature request at GitHub – https://github.com/Stuart-Moore/SQLAutoRestores/issues
Current practice at work is that all SQL Server databases are backed up, either to a central share, or locally and then transferred to the central share. So we end up with a backup folder structure like:
I want to randomly pick a set of backup files to restore. So the first thing I want to do is to get all the ‘Bottom folders’ of this directory tree:
$folders = Get-BottomFolders \\backupserver\share
And if I’ve split the backups across multiple shares I can add more:
$folders += Get-BottomFolders \\server2\backups$\
So now I want to pick a folder at random from those. But, not everyone cleans up after themselves, so there could be folders that don’t contain anything we’re interested in. So there’s a Test-DBBackupsExist function that makes sure we have a file. So we loop until we get something useful:
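A rough sketch of that loop (Get-BottomFolders and Test-DBBackupsExist are the module's own functions, and their exact parameters may differ):

do {
    # Pick a random candidate folder, then check it actually contains backup files
    $backupFolder = $folders | Get-Random
} until (Test-DBBackupsExist $backupFolder)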
Great, we’ve got a folder with some SQL Server backups in it. Now we need to see what’s in the backup files. So we do a scan of the file headers, which needs a SQL Server, so we build a SQL Server connection as well:
This returns a simple PowerShell object containing the header highlights from each file in the folder.
Note: at this point we've not checked that we've got a complete restorable set of files. For all we know, we got 30 transaction log files and no full backup to start from!
I prefer to restore databases to random points in time rather than just the latest available. This gives a wider range of options to compare, and might just mean that you'll discover that your SAN is corrupting the 22:15 t-log backup.
The next function checks we've got at least one 'anchoring' full backup, picks the earliest point in time that backup covers, then gets the latest point in time covered by the backup files, and returns a random point between those 2 extremes. This will be our Recovery Point Objective.
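An illustrative sketch of that last step, assuming $earliestPoint and $latestPoint have come out of the header scan (both variable names are made up here):

# Pick a random number of seconds inside the window the backups cover
$windowSeconds = ($latestPoint - $earliestPoint).TotalSeconds
$restorePoint  = $earliestPoint.AddSeconds((Get-Random -Minimum 0 -Maximum $windowSeconds))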
Then we check we have enough space to restore. This includes checking for any file growth during the restore (if your transaction log grows to a stupid size during the day, then it'll be grown to that size during the restore and sized down later on, so you need to accommodate the largest amount of space your database occupies, not just its final size):
And then we test the difference between the SQL Server version of the instance that did the backup and the SQL Server instance we're asking to perform the restore. Microsoft state that restoring across more than 2 major versions isn't allowed, so we fail it in this case (non SQL Server backups aren't supported (yet!)).
Rinse, repeat ad infinitum. I have this process running 24×7 on a dedicated restore instance. On average I restore 80 databases a day and cover every production database in a 2 week window (it's random so not guaranteed, but I have a priority list that skews it!).
Currently I collect my statistics with some simple Send-MailMessage usage, but I want something more robust in this module, so that's on the list of things to get fixed before we go to 1.0.0.0 properly.
Hopefully that’s given some ideas on how to use the module. I’d love to hear any ideas on improvements or how you’d want to use it in your environment. Comment here, drop me an email, or ping me on twitter (accounts all linked top right).
Having had such a great time at last year's SQL Saturday in Exeter, I was very happy to see the SQL Southwest team announce another one for 2014 on Saturday 22nd March (Register here).
So, I’ve been offered the chance to present 2 consecutive sessions covering 1 topic, 100 minutes to fill with something new. I’ve decided that based on feedback and questions from my “Using PowerShell for Automating Backups and Restores with PowerShell” presentation I’ll be filling the 100 minutes with:
Ever wonder how some people always seem to know the answer in SQL Server? Or seem to come up with it with a minimum of digging around?
Well, they’re not necessarily cleverer than you, it could just be that they’re better at playing with SQL Server than you are. Knowing how to quickly and simply build up a test scenario means whenever you want to try something out, or prove a hunch then you can be most of the way to proving something while others are still Googling and Binging away. Or if you’ve found something on the Internet, you can check it does actually do what it claims to. Because as Abraham Lincoln once said:
So here we are, post 34 in my 31 post series about using PowerShell to perform SQL Server backups and Restores. We’ve taken a fairly whistle stop tour of the main points. There’s been a couple of omissions, but hopefully I’ve provided you with plenty of examples you can use in your own situations, or at least some ideas about how you can approach automating your restores. I also hope I’ve shown you that automating those restores needn’t be the nightmare you might have been led to believe from reading through some of the dynamic T-SQL out there.
Much of the more advanced use of these techniques is down to using deeper features of PowerShell rather than the SMO features. If you want to learn more about PowerShell then I recommend the following books:
I have also been presenting on this topic at UK SQL Server User Groups this year. A set of slides, scripts and demos from one at Cardiff SQL User Group from October 2013 are available for download (cardiff-usergroup-powershell-backups-and-restores-01012013).
I'm also consolidating some of my production functions and routines into a PSBackup module, which I'm hoping to put up on GitHub so they can be worked on as an open project by anyone who wants to contribute. When it's up I'll post an announcement.
I hope you've enjoyed the posts, and please feel free to leave a comment, drop me a mail or ping me on twitter if you have any questions.
This post is part of a series posted between 1st September 2013 and 3rd October 2013, an index for the series is available here.