Yesterday we looked at the basics of using the Backup-DbaDatabase function. In all of those examples the backup files were going to the default backup location of our SQL Server instance, and using the default name generation process of Backup-DbaDatabase.
This often isn’t what you want, so we offer you plenty of options for building and handling paths and file names:
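A hedged example; the instance, database and paths below are placeholders of mine, not values from the original post:

```powershell
# -Path sets the backup folder, -FilePath the file name, and -ReplaceInName
# swaps tokens such as dbname, backuptype and timestamp into the name at run time
Backup-DbaDatabase -SqlInstance 'server1\instance' -Database 'roles1' `
    -Path '\\backupserver\sqlbackups' -FilePath 'dbname-backuptype-timestamp.bak' -ReplaceInName
```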
There is one other type of backup that is very important to be aware of: the COPY_ONLY backup. When a SQL Server database is in the full recovery model you want to make sure that you don’t interrupt the backup chain, as that can ruin your chances of performing a successful restore.
For instance, say you’re using differential backups. Each differential backup contains all changes since a specific full backup; if you take a new full backup, all subsequent differentials are based on that one. That can be a problem if you’re just grabbing a backup to refresh a test environment or to investigate an issue.
A COPY_ONLY backup does not break the chain, so it can be deleted once it’s no longer needed without damaging your recovery plan.
To take one of these, just use the -CopyOnly switch:
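For example (instance and database names are placeholders):

```powershell
# -CopyOnly takes the backup without affecting the backup chain or differential base
Backup-DbaDatabase -SqlInstance 'server1\instance' -Database 'roles1' -CopyOnly
```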
Maybe you want to try something out and see what’s going to happen, but you don’t want to actually do the backup?
That’s what the -OutputScriptOnly switch is for. This goes through all the same processing as a full run; the only difference is that Backup-DbaDatabase will just spit out the T-SQL without running the command.
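For example, running something like this (the instance name is a placeholder; -FileCount 3 stripes the backup across three files, matching the output below):

```powershell
Backup-DbaDatabase -SqlInstance 'server1' -Database 'roles1' -FileCount 3 -OutputScriptOnly
```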
This will not perform a backup, but will produce this for you:
BACKUP DATABASE [roles1] TO DISK = N'/var/opt/mssql/data\roles1_202004281714-1-of-3.bak', DISK = N'/var/opt/mssql/data\roles1_202004281714-2-of-3.bak', DISK = N'/var/opt/mssql/data\roles1_202004281714-3-of-3.bak' WITH NOFORMAT, NOINIT, NOSKIP, REWIND, NOUNLOAD, STATS = 1
We’ll be covering what some of those other options are in a post in a couple of days.
Backup-DbaDatabase produces a lot of output that can be of interest. Quite a lot of it isn’t shown at first glance, so let’s take a look at what’s available:
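One quick way to see everything (names are placeholders again):

```powershell
# Capture the object Backup-DbaDatabase returns...
$backup = Backup-DbaDatabase -SqlInstance 'server1' -Database 'roles1'
# ...then expand every property, rather than just the default display set
$backup | Select-Object *
```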
With the current pandemic-enforced usage of Teams over chatting at the water cooler, I wanted a way to hide the background.
My wife’s a nurse, so the home ‘office’ is hers for patient confidentiality at the moment, which means I’m nomading around the house. My background therefore varies a lot, so having something other than an untidy shelf of books, the kitchen counter or a blinding window glare behind me would be nice.
There are plenty of hints on having a custom Teams background on Windows, but not so much about having one on a Mac. With a bit of digging it turns out that if you want a custom Teams background on your Mac it’s quite simple after all.
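The folder in question is buried in your Library. Assuming a standard Teams install (the path may move between versions), you can open it straight from a terminal:

```powershell
# Opens the Teams custom background upload folder in Finder (pwsh on macOS)
open "$HOME/Library/Application Support/Microsoft/Teams/Backgrounds/Uploads"
```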
Just drop the images you want Teams to use as a custom background into that folder and they’ll show up in the gallery.
It seems Teams isn’t picky about which file format you use for a custom background. I’ve tried png, jpeg and gif, and they all work fine. Unfortunately animated gifs don’t work, which is either a good or a bad thing depending on your personal love of gifs.
With a number of the large in-person Data Platform conferences and meetups having to cancel or take a hiatus due to the current Covid-19 outbreak, Matt Gordon (b | t) and I decided to do something about it.
So we came up with Data Platform Discovery Day: a two-day online event aimed at people who are looking for a start in the Data Platform world or want to learn some fundamentals about an aspect of it.
Each day will run alongside business hours on a different continent, and will feature ten 50-minute sessions. Each day will have different speakers, and there’s nothing to stop you attending both if you fancy an early morning or a late night.
Day 1 is running during US daytime on 29th April. The first session will begin at 9am Eastern Time, and a new session will begin at the top of each hour.
Day 2 is running during European daytime on 30th April. The first session will begin at 9am BST (UTC+1), again with a new session starting at the top of each hour.
Registration for the events will start once we’ve finalised the speakers.
And talking of speakers, the Call for Papers for both events is open. You can submit to both events if you want. Please remember that sessions are 50 minutes, and content should ideally be Level 100 material.
So I’m busy working on my new dbaSecurityScan module, trying to write all the Pester tests for the AppVeyor pipeline.
Part of the testing for the module involves building a number of test scenarios, each of which needs a database spinning up for it.
On the Windows builds, that’s nice and easy, as I just use this in the test script to loop through all the scenarios and run the .sql files:
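Something along these lines; the folder layout and instance variable are assumptions of mine, not the module’s actual build script:

```powershell
# Loop through each scenario folder and run its .sql files with sqlcmd
ForEach ($scenario in Get-ChildItem -Path '.\tests\scenarios' -Directory) {
    ForEach ($file in Get-ChildItem -Path $scenario.FullName -Filter '*.sql') {
        sqlcmd -S $sqlInstance -d master -i $file.FullName
    }
}
```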
However, I’m developing on my MacBook Pro as I want this module to be nicely platform agnostic. SQL Server is running nicely in Docker and dbatools can connect to it happily, but I just can’t get sqlcmd to work: lots of TCP Provider: Error code 0x102 and TCP Provider: Error code 0x2AF9 messages no matter what I try. So, being short on time, I thought I’d try something else, and hopefully this will work when I get the Linux build running as well.
The scripts I want to run create databases, users, schemas and a whole lot more, so there are lots of batch separators (; and GO) in them. Invoke-DbaQuery is a great function, but it doesn’t like working with batch separators (not through want of trying, it’s just really tricky). So it’s time to drop back to some raw SMO to run the scripts with ExecuteNonQuery().
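A sketch of what that looks like, reusing the same assumed folder layout. Connect-DbaInstance hands back an SMO server object, and SMO’s ServerConnection.ExecuteNonQuery copes with GO separators:

```powershell
# An SMO connection via dbatools; $sqlInstance and $cred are assumed build variables
$server = Connect-DbaInstance -SqlInstance $sqlInstance -SqlCredential $cred
ForEach ($scenario in Get-ChildItem -Path '.\tests\scenarios' -Directory) {
    ForEach ($file in Get-ChildItem -Path $scenario.FullName -Filter '*.sql') {
        # Read the whole script as one string and let SMO handle the batches
        $server.ConnectionContext.ExecuteNonQuery((Get-Content -Path $file.FullName -Raw))
    }
}
```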
I’ve added a variable to my builds that lets me pick whether I want to run via sqlcmd or via SMO. At the minute I’m just using it on my machine, but if I run into problems later it’s nice to know I can toggle it based on which platform it’s running on.
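Roughly like this (the variable name is my invention, not the one in the actual pipeline):

```powershell
# $env:RUN_VIA is an assumed build variable: 'sqlcmd' on the Windows builds,
# anything else falls through to the SMO path above
if ($env:RUN_VIA -eq 'sqlcmd') {
    sqlcmd -S $sqlInstance -d master -i $file.FullName
} else {
    $server.ConnectionContext.ExecuteNonQuery((Get-Content -Path $file.FullName -Raw))
}
```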
I’m currently migrating a lot of SQL Server instances onto newer virtual machines. Quite a few of these instances talk to each other via linked servers for various historical reasons, and a lot of that chat is done via distributed transactions, which means configuring the Microsoft Distributed Transaction Coordinator (MSDTC).
So one of the first things I need to do is check that DTC is working between the two boxes. This is dead simple with the official PowerShell module for MSDTC: it’s just a case of running Test-Dtc and then working through any errors. So, we just run this:
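```powershell
# SOURCE and DESTINATION are placeholder computer names, matching the error below
Test-Dtc -LocalComputerName 'SOURCE' -RemoteComputerName 'DESTINATION' -Verbose
```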
And as this is a post about fixing something, it won’t surprise you that I’m going to get an error message:
"The OleTx CID on SOURCE and DESTINATION is the same. The CID should be unique to each computer."
At C:\Windows\system32\WindowsPowerShell\v1.0\Modules\MsDtc\TestDtc.psm1:266 char:13
+ throw ([string]::Format($Strings.SameCids, "OleTx", $Loca ...
+ CategoryInfo : OperationStopped: ("The OleTx CID ...each computer.":String) , RuntimeException
+ FullyQualifiedErrorId : "The OleTx CID on SOURCE and DESTINATION is the same. The CID should be uniqu
e to each computer."
But in much more fetching shades of red and yellow, announcing that “The OleTx CID on SOURCE and DESTINATION is the same. The CID should be unique to each computer.”
The cause is really simple to grasp. When the Distributed Transaction Coordinator is installed it registers a GUID to identify it, the theory being that a GUID clash should be a vanishingly rare occurrence.
That is, until someone starts cloning virtual machines. So I have a batch of shiny new VMs that all think they’re the same instance of DTC, which is not so good. The accepted fix used to be to manually remove the Distributed Transaction Coordinator, clean the registry, restart and then reinstall everything. That sounds like a lot of work to me!
The msdtc module makes the fix very simple to do as well:
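A hedged sketch of the reinstall, run on the cloned machine. Removing and reinstalling MSDTC generates fresh CIDs; check the restart implications for your own environment first:

```powershell
Uninstall-Dtc
Install-Dtc
Start-Dtc   # bring the coordinator back up; a full reboot may be safer on clones
# Re-run the test to confirm the clash has gone (DESTINATION is still a placeholder)
Test-Dtc -LocalComputerName $env:COMPUTERNAME -RemoteComputerName 'DESTINATION'
```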
So you’ve got the when, and you’ve got the who; now how do you proceed? Remember, you could be holding someone’s job or professional reputation in your hands.
As a long-time DBA I’m well aware that applications are not always the best-written things, and instructions don’t always spell out correctly what an option is going to do.
In the case I was writing about, I withheld the name from general knowledge; a single senior manager was made aware that I knew and would be investigating. Why did I do this?
In the case of this particular third party application, there were a few reasons:
Deleting data within the app should write to an internal audit trail; this hadn’t happened
Users cannot generate their own SQL queries, so this wasn’t an ad hoc query either
The single-row delete was in its own transaction
This particular app is badly written; we’ve seen it fail to correctly write its own ini files on exit
It has a habit of crashing out for no reason
So, a bit like a lot of applications out there. Now I had some time to investigate, I started by going through logs on the client Windows machine and trying to replicate the issue on my own.
Eventually I traced it back to a coding error in the application, where an update was processed as:
Delete old data
Insert new data
Write audit log
None of this happened in an encompassing transaction (as was clear from studying the transaction log), so when the app crashed just after step 1, the row was simply left missing, with nothing written anywhere else.
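For illustration only (the table and column names are invented), wrapping all three steps in a single transaction would have rolled back cleanly on a crash. Pushed through Invoke-DbaQuery as one batch, it might look like:

```powershell
# Invented object names; the point is the single BEGIN/COMMIT around all three steps.
# No GO separators are needed here, so Invoke-DbaQuery is fine with it.
$sql = @"
BEGIN TRANSACTION;
    DELETE FROM dbo.Record WHERE RecordId = 42;
    INSERT INTO dbo.Record (RecordId, Payload) VALUES (42, 'new value');
    INSERT INTO dbo.AuditTrail (RecordId, Action, ChangedAt)
        VALUES (42, 'Update', SYSUTCDATETIME());
COMMIT TRANSACTION;
"@
# If the app dies before COMMIT, SQL Server rolls the whole lot back
Invoke-DbaQuery -SqlInstance 'server1' -Database 'appdb' -Query $sql
```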
To the user who was logged in at the time, the crash wouldn’t have been apparent as the GUI doesn’t display app status. The crash also happened at 16:55, which is pretty much home time, so they wouldn’t have looked at the record again.
Now I had the evidence, I could talk to the senior manager and show that we needed to raise this with the supplier, that the user was blameless, and that we should recommend users double check any updates to be sure.
Releasing the name to a less trusted audience without this background information would probably have led to the user being blamed for something that wasn’t really their fault.
So, just because you have the information it doesn’t mean it’s actionable without adding more context and knowledge around it.
It’s all been a bit quiet around here with lots on at work and training for a marathon. One piece of news I’ve not mentioned is that I’m now CISSP certified 👍. I sat the exam in December, but it’s taken a while for the paperwork and accreditation to get sorted and for everything to become official.
The main thing I’ve taken away from studying for the certification is that process rules everything. As in-the-trenches DBAs, we worry more about the specifics of SQL Server permissions or whether the Oracle auditing is correct. But why are we doing those things, and are we doing them correctly?
Correctly doesn’t just mean technically correct; we’re almost certainly doing them that way or things would break. What I mean is: are we correctly implementing the processes and policies that drive the rest of the business?
For example, everyone loves backups (yeah, I know, this is my favourite thing as well). But how much should you be keeping? Do you really need three years of backups? Is that just increasing the amount of data you could lose in a breach? Are the older backups encrypted, and would restoring and encrypting them break the purposes they were kept for?
If you’re never going to use them, why are you keeping them? If it’s just for a CYA audit reason, then why not just keep the audit logs? There’s less chance of leaking PII or financial data if you’ve just got the bare bones of X did Y on Z.
This alignment with organisational policy is a core requirement for doing SQL Server security correctly. It will allow you to concentrate on exactly what needs to be done, rather than just running around implementing ad-hoc fixes every time a hole appears.