Stuart Moore

Musings of a Data professional

Category: sql server

Rolling Back SQL Server agent job owner changes

In a previous post I showed you a technique for updating the owner on all your SQL Agent jobs.

Whilst this isn’t a major change, you do need to have checked that the new owner account has the correct permissions for everything the jobs do (e.g. EXECUTE AS or CmdExec steps). With this in mind, someone asked me how I’d back the change out, either for all jobs or just for that one specific MUST NOT FAIL job that is now failing.

As we have 2 ways to change them, we have 2 ways to back out the changes. Note that both of these require you to have done a small amount of work up front; there is no magic rollback option 🙁

PowerShell and dbatools

This one is a little simpler, as we can use a lot of PowerShell plumbing to do the heavy lifting for us. To set things up we run:

Get-DbaAgentJob -SqlInstance server1 | Select-Object Name, OwnerLoginName | ConvertTo-Csv | Out-File .\server1-JobOwners.csv

If you’re not a PowerShell user I’ll just break this down into steps:

  1. We use dbatools’ `Get-DbaAgentJob` cmdlet to get all the Agent jobs on server1
  2. We pipe the output to `Select-Object`, selecting just the job Name and its owner’s login name
  3. The output is then converted to a CSV object
  4. Finally we write the data to a file in the current folder

So we’ve created a nice csv list of the original owners of our jobs, great. Now how do we use those to roll back?
To roll back every job:

Get-Content .\server1-JobOwners.csv | ConvertFrom-Csv | ForEach-Object {Set-DbaAgentJob -SqlInstance server1 -Job $_.Name -OwnerLogin $_.OwnerLoginName}

To roll back just one job it’s very similar; we just insert a Where-Object step into the pipeline:

Get-Content .\server1-JobOwners.csv | ConvertFrom-Csv | Where-Object Name -eq 'My Job' | ForEach-Object {Set-DbaAgentJob -SqlInstance server1 -Job $_.Name -OwnerLogin $_.OwnerLoginName}

In both cases we read in our CSV and convert it back into PowerShell objects. For a single job we use Where-Object to filter down to that one job; you could also use -like if you wanted to pattern match. The remaining records then get piped through a ForEach-Object loop, where we use Set-DbaAgentJob to reset each job’s owner.

T-SQL

The pre-setup for this one involves creating and populating a table to store the old values:

CREATE TABLE JobOwnerHistory(
  Job_Id char(36),
  JobName nvarchar(128),
  OwnerLoginName nvarchar(128) -- login names are sysname, so up to 128 characters
)

INSERT INTO JobOwnerHistory
  SELECT
    sj.job_id,
    sj.name AS JobName,
    sp.name AS OwnerLoginName
  FROM
    msdb..sysjobs sj
    INNER JOIN sys.server_principals sp ON sj.owner_sid = sp.sid

So now, resetting a job’s owner is just a modification of our original script:

DECLARE @job_id char(36)
DECLARE @OwnerLoginName nvarchar(128)
DECLARE JobCursor CURSOR
FOR
SELECT
  job_id,
  OwnerLoginName
FROM
  JobOwnerHistory
--WHERE JobName LIKE '%Finance%'
OPEN JobCursor
FETCH NEXT FROM JobCursor INTO @job_id, @OwnerLoginName
WHILE (@@FETCH_STATUS = 0)
BEGIN
  -- pass the job_id rather than the job name to sp_update_job
  EXEC msdb..sp_update_job
    @job_id = @job_id,
    @owner_login_name = @OwnerLoginName
  FETCH NEXT FROM JobCursor INTO @job_id, @OwnerLoginName
END
CLOSE JobCursor
DEALLOCATE JobCursor

As written, that will reset the owner of every job recorded in JobOwnerHistory; if you want to filter down to a subset of jobs, uncomment and modify the WHERE line.

Hope those examples are helpful. If you’ve stumbled across this page and it doesn’t quite fix your problem, please drop a comment and I’ll try to lend a hand.

Bulk changing SQL Server Agent job owners with dbatools and T-SQL

Now we all know that having SQL Server Agent jobs owned by ‘Real’ users isn’t a good idea. But I don’t keep that close an eye on some of our test instances, so wasn’t surprised when I spotted this showing up in the monitoring:

The job failed. Unable to determine if the owner (OldDeveloper) of job important_server_job has server access (reason: Could not obtain information about Windows NT group/user 'OldDeveloper', error code 0x534. [SQLSTATE 42000] (Error 15404)).

Wanting to fix this as quickly and simply as possible, I just wanted to bulk move them to our job-owning account (let’s use the imaginative name of ‘JobAccount’).

I have 2 ways of doing this quickly. First up is an old T-SQL script I wrote years back (yeah, this happens all the time). In this case we’re looking for all the jobs owned by the user OldDeveloper and moving them to the JobAccount user. It uses a cursor, but they aren’t always the spawn of evil 🙂

DECLARE @job_id char(36)
DECLARE JobCursor CURSOR
FOR
SELECT sj.job_id FROM msdb..sysjobs sj
    INNER JOIN sys.server_principals sp ON sj.owner_sid = sp.sid
    WHERE sp.name = 'OldDeveloper'
OPEN JobCursor
FETCH NEXT FROM JobCursor INTO @job_id
WHILE (@@FETCH_STATUS = 0)
BEGIN
  -- pass the job_id rather than the job name to sp_update_job
  EXEC msdb..sp_update_job
    @job_id = @job_id,
    @owner_login_name = 'JobAccount'
  FETCH NEXT FROM JobCursor INTO @job_id
END
CLOSE JobCursor
DEALLOCATE JobCursor

And then, of course, there’s a simpler dbatools method in PowerShell:

Get-DbaAgentJob -SqlInstance server\instance | Where-Object {$_.OwnerLoginName -eq 'OldDeveloper'} | Set-DbaAgentJob -OwnerLogin JobAccount

Use Get-DbaAgentJob to get all the SQL Agent jobs from your instance, a quick pipe through Where-Object to filter down to just the ones with a ‘bad’ owner, and finally into Set-DbaAgentJob to set the new owner login.

You can use whichever you prefer, as they’ll do the same job. I find the PowerShell/dbatools one the easiest, as I can remember the syntax; with the T-SQL one I have to remember where I left the original copy 😉

Can’t find a recently deployed SSIS package

Had a developer wander across, confused about where the SSIS package he’d just redeployed had gone. There were no errors when he deployed the package to Integration Services, but when he looked at the properties of the package (or at least the one he thought he’d uploaded) it was showing a deployment date of 3 weeks ago.

A handy feature of SSIS now using SSISDB to store its packages is that we can query it with some simple T-SQL to find out where the package went. In this case all I needed to do was ask:

select
	p.name,
	f.name as FolderName,
	p.deployed_by_name,
	p.last_deployed_time
from
	internal.projects p
	inner join internal.folders f on p.folder_id = f.folder_id
order by
	p.last_deployed_time desc

And I could see that the last time he’d deployed it, it had gone into Folder Y rather than Folder X. So after a quick clean up and redeploy everything was as he expected.

How are SQL Server Log Sequence Numbers (LSNs) generated?

If you’ve ever dug down into the SQL Server transaction logs or had to build up restore chains, then you’ll have come across Log Sequence Numbers (LSNs). Ever wondered why they’re so large, why they all look suspiciously similar, why they don’t start from 0, and just how SQL Server generates them? Well, here we’re going to take a look.

Below we’ll go through examples of how to look inside the current transaction log and backed-up transaction logs. This will involve using some DBCC commands and the undocumented fn_dblog and fn_dump_dblog functions. The last 2 are very handy for digging into SQL Server internals, but be wary about running them on a production system without understanding what’s going on; they can leave file handles and processes behind that can impact your system.

So with that warning out of the way, let’s push on.
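As a taster, here’s the sort of quick peek fn_dblog gives you. This is just a minimal sketch with the column list trimmed to a few of the interesting ones, and it’s best run against a small test database:

-- Show the 10 most recent log records in the current database
SELECT TOP (10)
    [Current LSN],
    Operation,
    Context,
    [Transaction ID]
FROM fn_dblog(NULL, NULL)   -- NULL, NULL = no start/end LSN filter
ORDER BY [Current LSN] DESC;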


Complex SQL Server restore scenarios with the dbatools Restore-DbaDatabase pipeline

No matter how hard the dbatools team try, there’s always someone who wants to do things we’d never thought of. This is one of the great things about getting feedback direct from a great community. Unfortunately a lot of these ideas are either too niche to implement, or would mean a lot of complex code for a single use case.

As part of the Restore-DbaDatabase stack rewrite, I wanted to make it easier for users to get their hands dirty within the restore stack. Not necessarily needing to dive into the core code and the world of GitHub Pull Requests, but manipulating the data flowing through the pipeline using standard PowerShell techniques, all the while letting the module code do the heavy lifting.

So, below the fold we’ll be looking at some examples of how you can start going to town on your restores.


Debugging the new dbatools Restore-DbaDatabase pipeline

A new version of the dbatools Restore-DbaDatabase command was released into the wild this week. One of the main aims of this release was to make it easier to debug failures in the restore process, and to drag information out of the pipeline easily (and anonymously) so we can increase our Pestering of the module with unit and integration tests.

So I’d like to share some of the features I’ve put in so you can take part.

The biggest change is that Restore-DbaDatabase is now a wrapper around 5 public functions. The 5 functions are:

  • Get-DbaBackupInformation
  • Select-DbaBackupInformation
  • Format-DbaBackupInformation
  • Test-DbaBackupInformation
  • Invoke-DbaAdvancedRestore

These can be used individually for advanced restore scenarios, I’ll go through some examples in a later post.

For now it’s enough to know that Restore-DbaDatabase is a wrapper around this pipeline:

Get-DbaBackupInformation | Select-DbaBackupInformation | Format-DbaBackupInformation | Test-DbaBackupInformation | Invoke-DbaAdvancedRestore

and its other function is passing parameters into these sub-functions as needed.

With previous versions of Restore-DbaDatabase you were restricted to throwing data in at one end and seeing what came out of the other, with some insight provided by Verbose messages. Now things can be stepped through, and data extracted as needed, in a format that plugs straight into our testing functions.

Get-DbaBackupInformation

This is the function that gets all of the information about backup files. It scans the given paths, and uses Read-DbaBackupHeader to extract the information from them. This is stored in a dbatools BackupHistory object (this is the same as the output from Get-DbaBackupHistory, so we are standardising on a format for Backup information to be passed between functions).

So this would be a good place to check that you’ve got the files you think you should have, and it’s also the first place we’d look if you reported a break in the LSN chain.

To get the output from the pipeline at this point we use the GetBackupInformation parameter:

Restore-DbaDatabase [Usual Parameters] -GetBackupInformation gbi

This will create a globally scoped variable $gbi containing the output from Get-DbaBackupInformation. Note that when passing the name to Restore-DbaDatabase you do not need to specify the $.

If you want to stop execution at this point, then use the -StopAfterGetBackupInformation switch. This will stop Restore-DbaDatabase from going any further.

This is also a good way of saving time on future runs, as the BackupHistory object can be passed straight back in, saving the overhead of reading all the file headers again:

$gbi | Restore-DbaDatabase [Usual Parameters] -TrustDbBackupHistory

Select-DbaBackupInformation

Here we filter down the output from Get-DbaBackupInformation to restore to the point in time requested, or to the latest point we can. This means we find:
– the last full backup before the point in time
– the latest differential backup between that full backup and the point in time
– and then all the transaction log backups needed to get us to the requested time
This is done for every database found in the BackupHistory object.

Here is where we’d begin looking for issues if you had a ‘good’ LSN chain from Get-DbaBackupInformation and then it broke.

To get this data you use the SelectBackupInformation parameter, passing in the name of the variable you want to store the data in (without the $, as per GetBackupInformation above).

There is also a corresponding StopAfterSelectBackupInformation switch to halt processing at this point. We stop processing at the first stop in the pipeline, so specifying multiple StopAfter* switches won’t have any extra effect; only the earliest one applies.
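For example, a minimal sketch of grabbing the filtered backup set (the instance and path here are placeholders):

# Capture the filtered BackupHistory into $sbi and stop there
Restore-DbaDatabase -SqlInstance server1 -Path \\backupserver\backups -SelectBackupInformation sbi -StopAfterSelectBackupInformation

# Inspect which files were chosen for each database
$sbi | Select-Object Database, Type, Start, End, FullName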

Format-DbaBackupInformation

This function performs the transforms on the BackupHistory object per the parameters passed in. This includes renaming databases, and file moves and renames. For everything we touch we add an extra Original property to the BackupHistory object; for example, the original name of the database will be in OriginalDatabase, and the target name will be in Database.

So this is a good spot to test why transforms aren’t working as expected.

To get data out at this pipeline stage use the FormatBackupInformation parameter with a variable name. As usual it has an accompanying StopAfterFormatBackupInformation switch to halt things there.
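A quick sketch of checking a database rename transform (the instance, path and names are placeholders):

# Rename the database on the way through, then inspect the transform
Restore-DbaDatabase -SqlInstance server1 -Path \\backupserver\backups -DatabaseName NewDbName -FormatBackupInformation fbi -StopAfterFormatBackupInformation

# Each history row should show the old name against the new
$fbi | Select-Object OriginalDatabase, Database -Unique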

Test-DbaBackupInformation

Before passing the BackupHistory object off to be restored we do some checks to make sure everything is OK. The following checks are made:

  • Is the LSN chain complete?
  • Does a destination file exist? If it’s owned by a different database, fail
  • Does a destination file exist? If it’s owned by the database being restored, is WithReplace specified?
  • Can SQL Server see and write to all the destination folders?
  • Can SQL Server create any missing destination folders?
  • Can SQL Server see all the backup files?

If a database passes all these checks then its backup history is marked as restorable, with the IsVerified property set to $True.

To get the data stream out at this point use the TestBackupInformation parameter.
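A minimal sketch of pulling out anything that failed verification (the StopAfterTestBackupInformation switch follows the same pattern as the earlier stages; instance and path are placeholders):

Restore-DbaDatabase -SqlInstance server1 -Path \\backupserver\backups -TestBackupInformation tbi -StopAfterTestBackupInformation

# Anything not marked IsVerified won't be restored - worth investigating why
$tbi | Where-Object {-not $_.IsVerified} | Select-Object Database, FullName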

General Errors with restores

Once we’re past these stages, our error reporting is at the mercy of the SMO Restore class. This doesn’t always provide an obvious cause straight away. Usually the main error can be found with:

$error[1] | Select-Object *

We like to think we capture most restore failure scenarios nicely, but if you find something we don’t, then please let us know, either on Slack or by raising a GitHub issue.

As usual, the dbatools terminating error will be in $error[0].

Providing the information for testing or debugging

If you’re running in to problems then the dbatools team may ask you to provide the output from one of these stages so we can debug it, or incorporate the information into our tests.

Of course you won’t want to share confidential information with us, so we would recommend anonymising your data. My normal way of doing this is to use 2 stubbing functions, along the lines of the sketch below:
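The original helper functions are embedded in the post itself and aren’t reproduced here; this is only my assumed sketch of the shape of the anonymiser (the function name comes from the example further down, everything else is a guess), hashing each sensitive property so the output stays deterministic:

# Hypothetical sketch - not the original gist code
function Filter-DbaToolsHelpRequest {
    param([object[]]$BackupHistory)
    # Properties to anonymise (FileList/OriginalFileList would need per-file handling)
    $props = 'ComputerName','InstanceName','SqlInstance','Database','UserName',
             'Path','FullName','OriginalDatabase','OriginalFullName','ReplaceDatabaseName'
    $md5 = [System.Security.Cryptography.MD5]::Create()
    foreach ($item in $BackupHistory) {
        foreach ($p in $props) {
            if ($null -ne $item.$p) {
                # Same input -> same hash, so related backup sets still line up
                $bytes = [System.Text.Encoding]::UTF8.GetBytes([string]$item.$p)
                $item.$p = [System.BitConverter]::ToString($md5.ComputeHash($bytes)) -replace '-',''
            }
        }
    }
}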

So if we’ve asked for the Select-DbaBackupInformation output, the process would be:

Restore-DbaDatabase [Normal parameters] -SelectBackupInformation sbi -StopAfterSelectBackupInformation
Filter-DbaToolsHelpRequest $sbi
$sbi | Export-Clixml -Depth 5 -Path c:\some\path\file.xml  # Export-Clixml's -Depth needs a value; 5 is a reasonable assumption here

And then upload the resulting xml file.

This method will anonymise the values in ComputerName, InstanceName, SqlInstance, Database, UserName, Path, FullName, FileList, OriginalDatabase, OriginalFileList, OriginalFullName and ReplaceDatabaseName, but it will produce the same output for the same input, so we can still work with multiple database sets at once.

I hope that’s been of some help. As always, if you’ve a question then drop a comment below, ping me on twitter (@napalmgram) or raise an issue with dbatools on Slack or GitHub.

Creating a Time Series database for SQL Server Restore testing

As a prelude to a post describing some new features of the dbatools Restore-DbaDatabase function, I thought I’d just go through how I create a time series database, to give me something to test restoring to a point in time with minimal overhead.

This sort of database can also be really handy for your own training if you want to play with point in time restores, or explore what can be done with standby restores.

Premise

I want to end up with:

  • A database – basic unit of restore
  • Single table – Let’s keep this small and simple
  • Table contains a series of rows written at a set time interval

So the table should end up like this:

StepID  DateStamp
3       08/06/2017 09:46:45
4       08/06/2017 09:47:15

We should end up with a nice spread of times, so we can easily say ‘restore to this point in time’ and then check that we’ve actually got there.
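The details are below the fold, but the heart of it is just a table with an identity column and a default timestamp, fed on a schedule. A minimal sketch (the table name and the 30-second interval are my assumptions, though the interval matches the sample rows above):

-- A table that records when each row was written
CREATE TABLE dbo.steps (
    StepID    int IDENTITY(1,1) PRIMARY KEY,
    DateStamp datetime2(0) NOT NULL DEFAULT (SYSDATETIME())
);

-- Insert a row every 30 seconds; run from a SQL Agent job, or just leave a loop like this going
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.steps DEFAULT VALUES;
    WAITFOR DELAY '00:00:30';
END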


Track SQL Server Index Usage over time

As part of my talk at the Nottingham SQL Server User group last night, I offered some tips on how to track SQL Server index usage over time.

sys.dm_db_index_usage_stats is a very handy DMV for seeing how many seeks/scans and updates your indexes get. But there are a few issues with it. Firstly, being a DMV, it will be flushed on a number of events:

  • SQL Server Instance Restart
  • Database Detach
  • Database Close
  • SQL Server 2012/2014 – Alter Index rebuild

For a good thorough check, normally a full month’s worth of stats is needed, as this covers most of the business processes. It may even be useful to compare stats across months, so you also capture events such as financial year end.

A better breakdown of when the scans and seeks happened can also be very useful. It’s all well and good knowing your index was scanned 5000 times and seeked 15000 times, but which processes were doing that? Was it the OLTP workload during 9-5, or was it the overnight processing? This could make a big difference to any planned index changes.

So here’s a technique I use. It’s pretty simple and allows you to tune it to your own needs.
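The full write-up is below the fold, but the core of the approach (this sketch, including the table and the schedule, is my assumption of the shape) is to snapshot the DMV into a history table on a schedule and diff the cumulative counters between captures:

-- History table for periodic snapshots of the index usage counters
CREATE TABLE dbo.IndexUsageHistory (
    CaptureTime  datetime2(0) NOT NULL DEFAULT (SYSDATETIME()),
    database_id  int    NOT NULL,
    object_id    int    NOT NULL,
    index_id     int    NOT NULL,
    user_seeks   bigint NOT NULL,
    user_scans   bigint NOT NULL,
    user_lookups bigint NOT NULL,
    user_updates bigint NOT NULL
);

-- Run from an Agent job (hourly, say); the counters are cumulative, so the
-- difference between two snapshots gives the activity in that window
INSERT INTO dbo.IndexUsageHistory
    (database_id, object_id, index_id, user_seeks, user_scans, user_lookups, user_updates)
SELECT database_id, object_id, index_id, user_seeks, user_scans, user_lookups, user_updates
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID();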


Easier SQL Server Restores using DBATools

So for this month’s T-SQL Tuesday, Kenneth asked us to talk about backups.

If you’ve been reading my blog for a while, you’ll be aware that I have a ‘thing’ about backups and restores (31 days of posts in a row on a topic shows a slight interest!), and I’ve been speaking at SQL user groups on the subject for years as well.

Why?

During my whole DBA career I’ve been aware that the absolute worst thing that can happen to me is having to admit to my employer that I can’t recover their precious data to the point they want (RPO), or do it in a time that they’re happy with (RTO).

To me, if either of those happens it’s a Resume Generating Event! And that’s an event I’ve spent years avoiding. And having those bases covered also stops other mistakes (what, I ran that DROP TABLE in prod?) from becoming RGEs as you’ve got a safety net.

As I’ve moved into management, I’ve also come to realise it’s even worse when you hear from someone who reports to you that they aren’t 100% sure about their backups, this could be 2 RGEs for the price of one!

So I’ve always been hugely keen on backups, and more importantly on ensuring that they’ve been tested and you’ve practiced your restores as much as possible.

SQL Server makes it very easy to perform backups; there are plenty of options, and unless you’ve got a fairly large or specialised database setup there’s fairly little configuration required.

Restoring a database, now that’s a whole other story. If you’re like most people, you’re taking a full backup each night and then transaction log backups hourly through the day. So at worst case you’ve got 25 files to restore.

On a larger system you could have a weekend full backup, twice-daily differential backups and 10-minute transaction log backups. Now, these things always fail just before the next full backup, so you’re looking at 1 full backup, 13 differential backups and 71 transaction log backups to process and restore! That’s going to take a while to script out and get going!

SSMS will help you out if you have access to the server that made the backups and it can see msdb; if that whole instance is now a smoking crater, all you’re going to have is a folder full of backup files.

So what to do about it?
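The full answer is below the fold, but it boils down to pointing dbatools’ Restore-DbaDatabase at that folder and letting it work out the chain (the instance and path here are placeholders):

# Scan every backup file in the folder, build the correct restore sequence, and run it
Restore-DbaDatabase -SqlInstance server1\instance -Path \\backupserver\backups\prod1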


Deleting Large amounts of data in SQL Server – Part 2

In the last part (Part 1) we looked at a reasonably generic script for deleting large amounts of data from SQL Server that helps when you issue a delete statement and SQL Server just sits there with your transaction logs filling up and refusing to clear out.

This time, we’re going to look at how to deal with that wonderful error message of:

Msg 547, Level 16, State 0, Line 1
The DELETE statement conflicted with the REFERENCE constraint "FK__address__personi__239E4DCF". The conflict occurred in database "dbgrow", table "dbo.address", column 'personid'.
The statement has been terminated.

Which is SQL Server’s way of letting you know that the data you’re trying to delete is referenced as part of a foreign key relationship, so it can’t be deleted unless you first deal with the records on the other end of the relationship.

So how do you work around this?
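The post works through the options below the fold, but the basic idea looks like this (the person/address schema is implied by the error message above, and the WHERE clause is a placeholder for your own criteria):

-- Delete the child rows holding the foreign key reference first...
DELETE a
FROM dbo.address AS a
WHERE a.personid IN (SELECT p.personid FROM dbo.person AS p WHERE p.personid < 1000);

-- ...then the parent rows can go without tripping the constraint
DELETE FROM dbo.person
WHERE personid < 1000;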

