Musings of a Data professional

Stuart Moore


Bulk renaming SQL Server tables with T-SQL or dbatools

While setting up for a large migration project involving a third party, I had to move a load of tables out of the way to accommodate a sudden change in approach. This happens fairly often with our dev environments, so I thought I'd share a couple of approaches I use to do this. First up, renaming tables using T-SQL, and then renaming tables with dbatools.

WARNING!!!!!

Renaming any SQL Server object needs to be done carefully. Any existing references to the object will break, so any view or stored procedure referencing the table will stop working until you fix it.

This can be a useful trick if you've been told you can remove some tables. Rename them, leave them for a day, and then if anyone screams you can rename them back.

Renaming tables with T-SQL

This one is good for:

  • simple rename options
  • when you don’t have dbatools available

To do this we're going to generate the SQL statements using the system views and then run them.

Let's say you have a bunch of tables you want to rename. For the first example we want to prefix all tables whose names contain 'user' with 'tmp_' while other testing goes on in the database.

The key to this is using sys.all_objects. To get our list of tables to rename, we use some simple T-SQL:

select name from sys.all_objects where name like '%user%' and type_desc='USER_TABLE';

Adding something to specify the type of object is ALWAYS a good idea, just to avoid trying to rename a stored procedure or system table.

Now use the T-SQL to build up sp_rename statements:

select 'exec sp_rename '''+name+''', ''tmp_'+name+''';' from sys.all_objects where name like '%user%' and type_desc='USER_TABLE';

Which will give you a nice little list of SQL statements that you can then copy and paste into a query window to run.

Renaming tables with dbatools

Whilst the dbatools module doesn't have a Rename-DbaTable command (yet!), you can still use dbatools to rename tables with a little bit of added PowerShell.

The first part of the pipeline grabs all the tables from the database using Get-DbaDbTable. We then pass that into Where-Object to filter down to the tables we want, and then into a ForEach-Object loop for the actual rename, which we do with the SMO Rename() method:

$instance = 'server1\instance'
$db = 'testdb'
# Grab the tables, filter to those with 'user' in the name, then rename each one with the SMO Rename() method
Get-DbaDbTable -SqlInstance $instance -Database $db | Where-Object {$_.Name -like '*user*'} | ForEach-Object {$_.Rename('tmp_' + $_.Name)}

Being PowerShell, there's a lot of options you can use to create the new name, which makes this more flexible than the T-SQL version above.
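For example, here's a minimal sketch of a different naming scheme, appending a date suffix rather than a prefix (it reuses the same made-up instance and database names as above):

$instance = 'server1\instance'
$db = 'testdb'
$suffix = '_' + (Get-Date -Format 'yyyyMMdd')
# Append today's date to each matching table name, skipping any table already renamed
Get-DbaDbTable -SqlInstance $instance -Database $db | Where-Object {$_.Name -like '*user*' -and $_.Name -notlike "*$suffix"} | ForEach-Object {$_.Rename($_.Name + $suffix)}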


Day 21 – SQL Server 2000 and Restore-DbaDatabase – 31 days of dbatools backup and restores

SQL Server 2000 and Restore-DbaDatabase

dbatools likes to support as many users as we possibly can. It's written for DBAs who may have no choice but to run older versions of software, or who don't want to have to constantly upgrade everything even if they could.

This means we support PowerShell from 3.0 upwards, and we try to go back as far as we can with SQL Server. Due to some major changes between SQL Server 7 and SQL Server 2000 (I was there, and it was a big change), we can only go back to SQL Server 2000.

The change from SQL Server 2000 to SQL Server 2005 was another massive jump in terms of the Storage Engine and other components. The one that causes dbatools' Restore-DbaDatabase the most issues was a change to the way headers were read from backup files. It appears that in SQL Server 2000 some extra headers were added to the write action, but not to the read action.

This means that despite the information being in a SQL Server 2000 backup, you can’t read it with a SQL Server 2000 instance.

Also, the SQL Server backup history tables in msdb had a major rewrite at the same time. Working that into Get-DbaDbBackupHistory would make that function even harder to maintain.

Fear not, we have a solution. Get-DbaBackupInformation can read the information from the SQL Server 2000 backup headers, as long as you use a more modern version of SQL Server to read the files. It doesn't matter which edition or version you use, so feel free to spin up whichever copy of SQL Server Express (2005 or later) you like and follow along:

    $backupHistory = Get-DbaBackupInformation -SqlInstance server\sql2008 -Path \\server2\files$\sql2000\db1 -DirectoryRecurse
    $backupHistory | Restore-DbaDatabase -SqlInstance sql2000 -DatabaseName db2 -ReplaceDbNameInFile

If you think this is going to be a regular occurrence, then to speed things up you might want to have a look at some of the strategies I outlined in Day 16's post, Recreating Backup History, to have this information to hand without scanning lots of files.

Conclusion

As much as we want to support as many versions as transparently as we can, there are some limits. But we still try to support those who are left on SQL Server 2000 where we can.

All posts in this series can be found at 31 Days of dbatools Backup and Restores

Day 19 – SQL Server Page restores with Restore-DbaDatabase – 31 days of dbatools backup and restores

Page restores with Restore-DbaDatabase

Oops, missed a day yesterday. In my defence repairing our payment system is pretty important 🙂

You may not always want to restore a whole database. A common piecemeal way of restoring a database is to restore just the pages that have become corrupted.

What is a page? All data in SQL Server is handled in units called pages. A SQL Server page is an 8KB allocation of disk or memory. Memory? Yes, it makes things easier if you use the same size for both, as you don't need to convert or split things up, which would only slow things down.

When I say all data in SQL Server is stored in page-sized chunks, I mean pretty much everything: data, indexes, binary data, and the SQL Server internal data defining tables, views, stored procedures and so on. The main thing not stored on pages is transaction log records, which have their own format.

Unfortunately, sometimes those pages can become corrupted. This could be through drive failures, controller failures, or very rarely a bug in the application.

You may even have some corrupted pages in your databases. If the page isn’t read very often then you might not notice it’s corrupted. But at some point you will.

Don't think that just because you can back up and restore a database it's guaranteed corruption free; corruption may still sneak through.

The only way to be sure is to run regular DBCC checks. Really you want to be running them nightly. Unfortunately, a DBCC check involves reading every page of data, so it has an impact on your SQL Server instance's performance. You can offload the checks to other instances by restoring the database there, but this can make tracking down the root cause of the corruption harder (Brent Ozar has a timely post on this very subject).
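If you want a quick way to see when CHECKDB last completed successfully for each database, dbatools can report that for you; a minimal sketch (the instance name is made up):

    # Report when DBCC CHECKDB last completed successfully for each database on the instance
    Get-DbaLastGoodCheckDb -SqlInstance server1\instance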

So what do you do when you find you've got corruption? You first need to find out when the corruption occurred; this is why it's important to check regularly, as you want to know ASAP, not weeks after the fact.

We're going to restore just the corrupted pages from our SQL Server backups using Restore-DbaDatabase. We need to start from the full backup taken BEFORE the corruption occurred, and then roll the pages forward using the transaction logs. If you catch the corruption early there's less work for SQL Server to do to recover the pages; finding out a week after your last full backup can make this a lengthy process.

To find just the pages we need to restore we're going to use another dbatools command, Get-DbaSuspectPage. This looks at the msdb.dbo.suspect_pages table, which keeps a log of all pages in the SQL Server instance that have been marked as suspect. Keeping an eye on this table can help spot corruption, but unless every page in your database gets read regularly it won't pick up every bit of potential corruption.

So, your luck's run out and you've found you have a lot of corrupted data pages. Hopefully this is just after a full backup.

First we run Get-DbaSuspectPage to find out how bad it is and which pages are suspect:

    $pages = Get-DbaSuspectPage -SqlInstance server1\instance -Database MyProdDb

The number of pages ($pages.count) will give you an idea of how bad things are.

For the next step you need to know which Edition of SQL Server you’re running:

  • If you’re running Enterprise Edition, then you can do this online. Users may notice a drop in performance, but will be able to access the database.
  • If you’re running Standard Edition, then unfortunately it’s an offline process. Users will not be able to use the database until the restore and recovery have finished.

SQL Server won’t warn you, so check before you run this. This will help you check:

Connect-DbaInstance -SqlInstance server1\Instance | Select-Object -Property Edition   

If you need to, let your users know.

Now we’re going to use our most recent backups to recover those pages:

    Get-DbaDbBackupHistory -SqlInstance server1\instance -Database MyProdDb -Last | Restore-DbaDatabase -SqlInstance server1\instance -PageRestore $pages -TrustDbBackupHistory -DatabaseName MyProdDb -PageRestoreTailFolder c:\temp

I'm using Get-DbaDbBackupHistory to get the backup history to speed things up. You can pass in files if you want to, but it will take a bit longer.

There are a couple of new parameters on the Restore-DbaDatabase command which are only used with this type of restore:

PageRestore

This lets Restore-DbaDatabase know that we're going to be doing page restores by passing in a list of the pages that need restoring. In this case we're passing in the output from Get-DbaSuspectPage, as it's the easiest way.

PageRestoreTailFolder

As part of the recovery phase for a page restore, SQL Server needs to take a tail log backup of the database and then restore it. The folder passed in via this parameter is where that tail backup will be written, so make sure SQL Server can read and write to the location and that you have plenty of space there.

As soon as the page restore has completed, take a full backup of your database. Now if anything breaks you have a known good point to start from. Definitely start looking into why the corruption may have occurred; sometimes it's just plain bad luck, other times it points to issues in applications or hardware. If you can find the culprit and remove it now, so much the better.
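For example, a quick sketch of taking that post-restore full backup with dbatools (the backup path is a placeholder; point it at your normal backup location):

    # Take a fresh full backup immediately after the page restore completes
    Backup-DbaDatabase -SqlInstance server1\instance -Database MyProdDb -Type Full -Path \\backups\files$\MyProdDb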

Conclusions

SQL Server database corruption happens. Not often, but it can happen, so it's best to be prepared to jump on it as soon as it appears. So run CHECKDB regularly and keep an eye on msdb.dbo.suspect_pages. If you start seeing lots of corruption then dig down to find out what's causing it.

All posts in this series can be found at 31 Days of dbatools Backup and Restores

Day 18 – Ola Hallengren maintenance solutions and Restore-DbaDatabase – 31 days of dbatools backup and restores

Restoring Ola Hallengren maintenance solutions backups

At dbatools we’re great fans of Ola Hallengren’s maintenance solution for dealing with backups and index maintenance. We even have a command to install and update his solution for you – Install-DbaMaintenanceSolution

If you’re using Ola’s solution then we can make some assumptions about your backups, which means we can offer a few more options.

If you execute the following:

    Restore-DbaDatabase -SqlInstance Server1\instance -Path \\backups\files$\db1 -MaintenanceSolution

Then we know you're pointing us at a backup folder created by Ola's solution, which means we already know how to recurse down into the folder structure. This would be the equivalent of

    Restore-DbaDatabase -SqlInstance Server1\instance -Path \\backups\files$\db1 -DirectoryRecurse

which isn’t too big a saving. But if you’re wanting to skip certain types of restores this can really speed things up:

    Restore-DbaDatabase -SqlInstance Server1\instance -Path \\backups\files$\db1 -MaintenanceSolution -IgnoreLogBackup

which doesn’t look much different to

    Restore-DbaDatabase -SqlInstance Server1\instance -Path \\backups\files$\db1 -DirectoryRecurse -IgnoreLogBackup

But the former gives quite a performance improvement over the latter. This comes about because we know that Ola's solution lays out the folder like this:

Server
   |---Database
      |--- FULL
      |--- DIFF
      |--- LOG

Because of this, with the first command we can just skip the LOG folder! With the second version we still have to scan all the backup headers to be sure which files are full backups, differential backups and log backups. With Ola's solution we know already. IgnoreDiffBackup works in the same way.

You can of course configure whichever backup tool you use to put your files into the same file structure Ola uses. As long as you’re confident that you only have the right type of backups in each folder it will work.

Conclusions

Ola's maintenance scripts are great; I use them exclusively at work and shudder when I come across an ancient maintenance plan when something crawls out of the woodwork. These little switches can improve your performance if you know where you want to restore to.

All posts in this series can be found at 31 Days of dbatools Backup and Restores

Day 17 – dbatools Restore-DbaDatabase and Azure – 31 days of dbatools backup and restores

Restoring with Azure with dbatools

All our talk about restores so far has been about restoring to on-premises SQL Server instances from on-premises file stores. How well does dbatools' Restore-DbaDatabase work with Azure?

Very well in fact. In today’s post we’ll look at the options for restoring to and from Azure databases.

Day 16 – Saving and recreating BackupHistory – 31 days of dbatools backup and restores

Recreating Backup History

Yesterday we looked at using BackupHistory from Get-DbaDbBackupHistory to speed up database restores by pulling the information from SQL Server.

However, this isn't always available. Perhaps you've a hard security split between the production box and the test environments, or you've completely lost the original SQL Server instance in a disaster of some sort.

Today we'll look at how you can save your backup history to disk, or recreate it from backup files ahead of time.
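As a taster, one simple approach is to serialise the backup history to disk with PowerShell's built-in Export-Clixml (the paths and names here are made up):

    # Save the backup history for a database out to disk...
    Get-DbaDbBackupHistory -SqlInstance prod1\instance -Database db1 | Export-Clixml -Path c:\temp\db1_history.xml
    # ...and load it back later, perhaps on a completely different machine
    $backupHistory = Import-Clixml -Path c:\temp\db1_history.xml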

Day 15 – dbatools and Backup History – 31 days of dbatools backup and restores

Backup History

Today we’re going to look at what underpins a lot of the work of Restore-DbaDatabase, backup history.

At the basic level it is just what it sounds like: a history of all the backups taken of a SQL Server database. Getting that history and turning it into a restore plan is a more complex business.

All the examples in the last 14 days have got the required backup history from the backup files passed in to them, either via Get-ChildItem or the Path parameter. When this happens a sub function of Restore-DbaDatabase scans every file passed in; if it's a SQL Server backup file its headers are read and the information passed back, and if it's not it's ignored.
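For instance, a file-based restore like this one (the paths are made up) triggers that scan for every file it finds:

    # Every file found under this path gets its headers scanned before the restore plan is built
    Get-ChildItem \\backups\files$\db1 -Recurse -File | Restore-DbaDatabase -SqlInstance server1\instance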

The results are returned as an array of BackupHistory objects. BackupHistory is dbatools' own internal object for backup history. On Day 2 of this series (Backup-DbaDatabase Basics) we saw what's contained in a BackupHistory object, but it's worth repeating here:

Example of data held in dbatools BackupHistory object

There's a lot of information stored in there. A lot of it is mainly of use to the dbatools functions, but other pieces are handy to have access to for planning and monitoring. We created the BackupHistory object to make moving this information between dbatools functions via the pipeline easier. You'll see it cropping up a lot between Backup-DbaDatabase, Get-DbaDbBackupHistory and Restore-DbaDatabase, and it's used deep inside things like Copy-DbaDatabase and Start-DbaMigration.
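A quick illustration of that pipeline in action, using BackupHistory to copy a database between two made-up instances:

    # The BackupHistory objects emitted by Backup-DbaDatabase flow straight into Restore-DbaDatabase
    Backup-DbaDatabase -SqlInstance server1\instance -Database db1 -Path \\backups\files$\db1 | Restore-DbaDatabase -SqlInstance server2\instance -TrustDbBackupHistory -DatabaseName db1_copy -ReplaceDbNameInFile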

Scanning a large number of SQL Server backup files can take a long time. If you're taking a full database backup on a Sunday night, a differential every 12 hours and a log backup every 20 minutes, then to restore the database to Thursday at 15:35 you're going to need to scan at least 280 files (roughly four days of log backups at three per hour, plus the full and differential backups). This can take a while even on a fast machine.

Today we're going to look at getting that information from your SQL Server instance. Tomorrow we'll be looking at ways of improving the speed when all you've got is the files.
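As a taster, a minimal sketch of pulling that history straight from msdb and having a look at what comes back (the instance and database names are made up):

    # Grab the backup history for a database straight from msdb rather than scanning files
    $backupHistory = Get-DbaDbBackupHistory -SqlInstance server1\instance -Database db1
    # A quick look at what came back
    $backupHistory | Select-Object Type, Start, Path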

Day 14 – Restoring a SQL Server database without recovery – 31 days of dbatools backup and restores

Restoring a SQL Server database without recovery

If you've not done much work with SQL Server restores, a little bit of background may help with today's post.

There are 2 phases to what we commonly call a SQL Server database restore:

  1. Restore the data to the disk
  2. Recover the database

The first phase is the one we normally think of, writing the SQL Server data back onto the disks.

During the recovery phase SQL Server ensures that everything in the database is consistent and that all transactions that were open at the point the database is restored to are rolled back.

All of the previous Restore-DbaDatabase examples in this series have been restoring the database and then recovering it as well, so the recovery phase has been hidden inside the function.

However there are a couple of scenarios when you don’t want the recovery phase to fire.

The first is when you are going to want to restore more backups to bring the database further forward in time. This might be in a disaster scenario where you’ve lost some of your backup storage. You could restore what you have while your Storage team are pulling the missing backups from tape.

The second is when you're not sure when you want to stop the restore. So you restore to a specific point in time, put the database into standby mode, and then apply more backups if needed.
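A rough sketch of the first scenario (all the paths and names are placeholders): restore what you have with -NoRecovery, then pick the restore back up with -Continue once the missing backups arrive:

    # Restore the backups we have, leaving the database in the restoring state
    Restore-DbaDatabase -SqlInstance server1\instance -Path \\backups\files$\db1 -DatabaseName db1 -NoRecovery
    # Later, once the missing backups are back from tape, carry on and recover the database
    Restore-DbaDatabase -SqlInstance server1\instance -Path \\backups\files$\db1_from_tape -DatabaseName db1 -Continue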

Day 13 – Restore-DbaDatabase and Point in Time Restores – 31 days of dbatools backup and restores

Restore-DbaDatabase and Point in Time restores

So far in this series we've just been restoring to the latest point in time that is in the backups. But often you want to restore to a particular point in time, maybe just before someone ran that TRUNCATE they shouldn't have, or because you need to look into an error in more detail.

So here's a little secret: if you dig into the Restore-DbaDatabase source code you'll see we always do a point in time restore, it's just that the default point in time is 1 year ahead. This helps to keep the logic cleaner throughout the Restore-DbaDatabase pipeline.

But we can do a bit more than that. Restore-DbaDatabase offers 2 types of Point in Time restores:

  • Restore to a specific Point in Time
  • Restore to a specific marked Transaction

These 2 options cover almost any reason why you'd want to restore to somewhere other than the end of the backup chain. Examples are below the fold.
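As a quick taster, a minimal sketch of the first option, restoring to a specific point in time (the paths, names and timestamp are made up):

    # Restore the database to just before the accidental TRUNCATE
    Restore-DbaDatabase -SqlInstance server1\instance -Path \\backups\files$\db1 -DatabaseName db1 -RestoreTime (Get-Date '2019-07-18 14:58:00')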

Day 12 – Moving and renaming files with Restore-DbaDatabase – 31 days of dbatools backup and restores

Moving and Renaming files with Restore-DbaDatabase

Yesterday we looked at renaming databases while restoring them. The observant may have wondered how we can avoid filename conflicts as well. We'll be covering that today.

There are quite a number of reasons why you'd want to change the filenames or paths of data files while you're restoring them:

  • Avoid a filename conflict
  • Original FilePaths don’t exist on target server
  • Different disk layouts
  • Just because you’ve never liked the naming schema

We’ll be going through a lot of examples today so this will be quite a long post.

One caveat: because SQL Server's restore doesn't support it, we can only change the physical name of a data file during the restore. To change the logical name you need to use the Rename-DbaDatabase function after the restore has completed.

Let’s look at some examples:
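To give a flavour of what's coming, a minimal sketch that moves the data and log files into new folders and prefixes the physical filenames (all the paths and names are placeholders):

    # Move the data and log files and prefix the physical filenames during the restore
    Restore-DbaDatabase -SqlInstance server1\instance -Path \\backups\files$\db1 -DatabaseName db1 -DestinationDataDirectory D:\SqlData -DestinationLogDirectory L:\SqlLogs -DestinationFilePrefix dev_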

