T-SQL Tuesday

T-SQL Tuesday time again. This month SQLChow (b|t) has set us the homework assignment of talking about SQL Server Automation. So here’s my take on the subject:

Every job has those mundane, repetitive tasks that you HAVE to do, day in, day out.

As SQL Server DBAs we have a particularly long list of things that we have to do to make sure our databases are working properly, and also to ensure that when they’re not we can fix them quickly enough that we’re not looking for a new job!

I’m sure all DBAs have a list of things they have to do repeatedly. Almost all of us could agree on these as some of the most common:

  • Performance monitoring/logging
  • Log Monitoring
  • Backups
  • Refreshing Test and Dev instances
  • Restores (please, please start automating these if you aren’t already; there’s a quick sketch after this list)
  • Integrity checks
  • and many, many more
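
Restores get that special mention because they’re so easy to put off. As a flavour of what automating them can look like, here’s a minimal sketch using the SqlServer module’s Restore-SqlDatabase; the instance names, paths and logical file names are all made up for illustration:

```powershell
# Minimal sketch of an automated restore test, assuming the SqlServer
# module. Instance names, paths and logical file names are illustrative.
Import-Module SqlServer

$BackupFile = '\\BackupShare\Prod\AdventureWorks.bak'   # hypothetical backup

# Relocate the data and log files so the test copy doesn't clash with
# anything already on the scratch instance (logical names are assumed)
$Relocate = @(
    (New-Object Microsoft.SqlServer.Management.Smo.RelocateFile -ArgumentList 'AdventureWorks_Data', 'D:\Data\AdventureWorks_RestoreTest.mdf'),
    (New-Object Microsoft.SqlServer.Management.Smo.RelocateFile -ArgumentList 'AdventureWorks_Log', 'L:\Log\AdventureWorks_RestoreTest.ldf')
)

Restore-SqlDatabase -ServerInstance 'TestServer\SQL1' `
    -Database 'AdventureWorks_RestoreTest' `
    -BackupFile $BackupFile -RelocateFile $Relocate -ReplaceDatabase

# A restore you can't query isn't proven: run a quick integrity check too
Invoke-Sqlcmd -ServerInstance 'TestServer\SQL1' `
    -Query 'DBCC CHECKDB (AdventureWorks_RestoreTest) WITH NO_INFOMSGS;'
```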

For each of these you’ve probably got a preferred way of automating them. Perhaps you’re using maintenance plans for some, automated T-SQL scripts for others, or even dabbling with PowerShell (my personal favourite, as you may have noticed).

When you first start automating your tasks it can seem like a huge undertaking. Not only are you having to do all your morning checks, but you have to describe them to a script as well. At this point a lot of people’s attempts to automate their workload start to flounder: it looks like a doubling of workload, and not even the prospect of a future reduction is enough to tempt them through the challenge.

Instead of seeing this as yet another dull task, try to make it interesting. When I’m asked to automate a process, I’ll do some work beforehand to find out what we actually want from the process, why we’re doing it, and how it might change in the future. This usually causes the neurons to fire up, and suddenly it’s a less dull task, and there’s a good chance I’ll learn something new.

Backups are one of those no-brainer options. You’re doing them because you have to have them, you want them to be reliable, and other than potential new options (like cloud backups with SQL Server 2014) there’s not going to be a huge amount changing in the future (assuming your solution can cope with new databases appearing on your instance).
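
For example, here’s a minimal sketch of a backup routine that copes with new databases by enumerating them at run time (assuming the SqlServer module; the instance name and backup share are illustrative):

```powershell
# Minimal sketch: back up every database on an instance, assuming the
# SqlServer module. Instance name and backup share are illustrative.
Import-Module SqlServer

$Instance  = 'ProdServer\SQL1'
$BackupDir = '\\BackupShare\Prod'

# Enumerating the databases at run time means a database created yesterday
# is backed up today with no change to the script
Get-SqlDatabase -ServerInstance $Instance |
    Where-Object { $_.Name -ne 'tempdb' } |      # tempdb can't be backed up
    ForEach-Object {
        $File = Join-Path $BackupDir ('{0}_{1:yyyyMMddHHmm}.bak' -f $_.Name, (Get-Date))
        Backup-SqlDatabase -ServerInstance $Instance -Database $_.Name -BackupFile $File
    }
```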

On the other hand, how about something like performance monitoring and logging? What’s the goal here:

  • are we just wanting to store lots of data in a table somewhere to use the day after an incident?

OR

  • do we want to have something with a little bit of intelligence, so it can spot trends and alert us?

One of these choices gives us a quick, potentially dull win; the other opens up something that could be interesting to develop and make life a lot easier in the future.

And then there’s the choice of how you’re going to write it.

  • Are you happy to write something that monitors the metrics you want to record now, for the handful of databases you care about?

OR

  • how about spending a bit more time writing something that takes in an easily updated list of the metrics you want to capture, along with parameters for which databases and servers to monitor?

It might seem like a lot of overhead now, but when the SQL Server estate you manage balloons in size, or Microsoft adds some new Perfmon counters, you can cope with minimal effort.
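
To make that second option concrete, here’s a rough sketch of the shape I mean. The function name, parameters and file names are all mine, and it leans on the built-in Get-Counter cmdlet:

```powershell
# Rough sketch of a metric collector driven by parameters rather than
# hard-coded values. Function, parameter and file names are illustrative.
function Get-ServerMetrics {
    [CmdletBinding()]
    param(
        # Servers to sample; pass as many as you like
        [Parameter(Mandatory)] [string[]] $ComputerName,
        # Perfmon counter paths, e.g. read from a text file, one per line
        [Parameter(Mandatory)] [string[]] $CounterPath
    )

    foreach ($Computer in $ComputerName) {
        Get-Counter -ComputerName $Computer -Counter $CounterPath |
            Select-Object -ExpandProperty CounterSamples |
            ForEach-Object {
                # Emit objects so the caller decides where they end up:
                # a table, a CSV, an alerting pipeline...
                [pscustomobject]@{
                    Computer  = $Computer
                    Counter   = $_.Path
                    Value     = $_.CookedValue
                    SampledAt = $_.Timestamp
                }
            }
    }
}

# Adding a counter is a one-line change to the input file, not a code change
$Counters = Get-Content '.\counters.txt'   # hypothetical list of counter paths
Get-ServerMetrics -ComputerName 'SQL01', 'SQL02' -CounterPath $Counters |
    Export-Csv '.\metrics.csv' -NoTypeInformation
```

Because the metrics live in a file and the servers are parameters, growing the estate or adding counters becomes a configuration change rather than a code change.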

And it’s much easier to take this code with you when you move on to pastures new with a new employer or a new contract. It’s also much easier to write a series of blog posts and share your work with the world if you’ve built it flexibly enough to share.

Perhaps while doing this work, you’ll also stumble across a better way of doing the task itself. Doing something by rote often means you end up doing it the way you learnt it on day one, while better ways of doing the same task pass you by. By stepping back and reviewing what you’re trying to do, you may come up with a much more efficient approach.

As a parting note, as a PowerShell fan I’d also strongly recommend that if you’re going to use it, you follow the PowerShell coding best practices below; they’ll really help you in the future if you need to extend or repurpose your solution (there’s a small example pulling them together after the list):

  • Write functions, not scripts
    • Don’t write lots of long single-purpose scripts; instead break them into smaller functions. It makes debugging easier, and tends to produce a lot of code that’s easy to reuse, so your next project will get off the ground much faster. If you write enough functions, you can look at wrapping them all up into a module.
  • Don’t use Write-Host
    • If you’re writing functions and cmdlets, then you’re going to want to pass information between them. And if you start using Write-Host there’s a good chance your output won’t be going where you want it to. Instead pass objects around, or use Write-Output and let PowerShell work out the correct place to send it.
  • Lots of Comments
    • If you automate enough things, eventually you’ll forget something. Save yourself (or your colleagues’) time by documenting things as you go along.
  • Don’t use shorthand.
    • It might save a few keystrokes substituting gci for Get-ChildItem or ? for Where-Object, but it’ll confuse the hell out of someone else at some point.
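
To show what I mean, here’s a tiny made-up example that follows the lot: a small single-purpose function, full cmdlet names, comments, and objects on the pipeline rather than Write-Host:

```powershell
# A tiny illustrative function following the practices above: single
# purpose, full cmdlet names, commented, and it emits objects on the
# pipeline instead of painting text on screen with Write-Host
function Get-OldBackupFile {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)] [string] $Path,
        [int] $OlderThanDays = 30
    )

    $Cutoff = (Get-Date).AddDays(-$OlderThanDays)

    # Get-ChildItem and Where-Object written out in full, not gci and ?
    Get-ChildItem -Path $Path -Filter '*.bak' -Recurse |
        Where-Object { $_.LastWriteTime -lt $Cutoff }
}

# Because the function outputs objects, the caller chooses what happens next
Get-OldBackupFile -Path '\\BackupShare\Prod' -OlderThanDays 14 |
    Remove-Item -WhatIf
```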

Anyone who has to pick up your scripts at a later point will thank you if you’ve followed these practices.

So go on, perhaps that incredibly dull task you spend 30 minutes on every morning is ripe for an overhaul. Examine it from every angle, and work out the best way to make it the scheduler’s job and not yours. And hopefully you’ll have learnt something interesting along the way!