Using PowerShell to scratch another itch, though for once it wasn’t a work or SQL Server related one. I’m a keen cyclist and like to keep track of all my rides, so I’ve been experimenting with various online ride trackers. I’ve settled on Strava, as I can have cool little online badges like this:
And race myself on my own segments.
The only problem was transferring a couple of years’ worth of data across from other sites or off my hard drive. Strava has a handy multiple file uploader, but it only allows you to upload up to 25MB or 25 files at a time. To streamline this I wanted to run through all my old files and put them into folders so that each folder stayed under both the 25MB and the 25 file limits. Enter PowerShell:
# Folder containing the files to be "chunked"
$file_to_process = Get-ChildItem F:\strava-tmp

# Running total of file sizes in the current folder
$current_total_size = 0

# Maximum total size of files for one folder (24MB, to stay safely under the 25MB limit)
$max_size = 24 * 1024 * 1024

# Maximum number of files in a folder
$max_count = 25

# Folder numbering count
$current_folder_index = 1

# Count of files in the current folder
$current_file_count = 0

# Base folder for the output folders
$output_base = "F:\strava2"

New-Item "$output_base\folder$current_folder_index" -ItemType Directory
$fullout = "$output_base\folder$current_folder_index"

foreach ($file in $file_to_process) {
    $tmp_size = $file.Length + $current_total_size
    if (($tmp_size -lt $max_size) -and ($current_file_count -lt $max_count)) {
        # Room left in the current folder, so copy the file in and update the running totals
        Copy-Item $file.FullName -Destination $fullout
        $current_total_size = $current_total_size + $file.Length
        $current_file_count++
    }
    else {
        # Current folder is full, so create the next one and copy this file in as its first entry
        $current_folder_index++
        New-Item "$output_base\folder$current_folder_index" -ItemType Directory
        $fullout = "$output_base\folder$current_folder_index"
        $current_total_size = $file.Length
        Copy-Item $file.FullName -Destination $fullout
        $current_file_count = 1
    }
}
I’d then have liked to use Invoke-WebRequest to do the actual uploading for me, but it appears that Strava’s v3 API is invite only, which isn’t that useful really.
This script also comes in handy for splitting files up for emailing if you have a maximum attachment limit on your account.
I seem to have a spate of 3rd party applications finally moving from SQL Server 2000 to a much newer version (SQL Server 2012). That means a lot of database migration, and as the applications need to stay up while the new versions are tested, it means using backup and restore.
As Microsoft only supports restoring backups from the previous 3 versions (SQL Server 2005, 2008 and 2008 R2) with SQL Server 2012, the process actually has to go like this:
1. Back up the SQL Server 2000 database
2. Restore the database onto an instance of SQL Server 2005, 2008 or 2008 R2
3. “Upgrade” the database by setting its Compatibility Level to the new server version
4. Back up the new database
5. Restore the database onto a SQL Server 2012 instance
6. “Upgrade” the database by setting its Compatibility Level to the new server version
Which to me is a lot of manual handling for quite a lot of databases. And did I mention that steps 2 and 5 will probably also mean relocating data and log files (potentially multiples of each), and some full text indexes as well for good measure? All of which makes this an unappetising prospect to do manually.
So time for some PowerShell automation. First off we import our good friend the SQLPS module, and then define 2 simple functions:
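In outline it opens like this (a minimal sketch rather than the exact original script; the two functions themselves are sketched out after their descriptions below):

Import-Module SQLPS
Set-Location c:\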
The eagle-eyed amongst you will note the Set-Location c:\ just after the Import-Module. That’s because the last thing SQLPS does is Set-Location sqlserver:\ to move to the SQLSERVER drive, which would make some of the later file copying much trickier, so we just make sure we’re back on a file system drive.
The first function, Database-Restore, takes the following parameters:
SQLServer – string: the server name (and instance name if it’s a named instance)
DatabaseName – string: the database name
BackupFilePath – string: where the backup you want to restore is held
RestorePath – string: the path you want to move the restored files to
Upgrade – optional boolean: whether to upgrade the restored database to the server’s current CompatibilityLevel
Nothing too unusual in the function. Running through it quickly, we perform the following operations:
Create a connection to the SQL Server instance
Create a Restore object, set its database, build a restore device and add our backup file to it. We set the option not to overwrite an existing database, just in case…
If we’ve passed in a folder to move the files to, read the list of files from the backup, then loop through it creating a RelocateFile object for each of them and adding it to our Restore object
Perform the actual restore
Then, if we’ve set Upgrade to TRUE, set the compatibility level equal to that of model and call Alter() to write it back
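Based on that description, a minimal sketch of Database-Restore might look like the following. This is a reconstruction rather than the exact original: it assumes the SMO assemblies have already been loaded by Import-Module SQLPS above, and it skips the error handling you’d want in production.

# Sketch of Database-Restore, reconstructed from the description above
function Database-Restore {
    param(
        [string]$SQLServer,
        [string]$DatabaseName,
        [string]$BackupFilePath,
        [string]$RestorePath = "",
        [bool]$Upgrade = $false
    )

    # Create a connection to the SQL Server instance
    $server = New-Object Microsoft.SqlServer.Management.Smo.Server($SQLServer)

    # Create a restore object, set its database, build a restore device and add our backup file
    $restore = New-Object Microsoft.SqlServer.Management.Smo.Restore
    $restore.Database = $DatabaseName
    $restore.ReplaceDatabase = $false   # don't overwrite an existing database, just in case
    $device = New-Object Microsoft.SqlServer.Management.Smo.BackupDeviceItem($BackupFilePath, "File")
    $restore.Devices.Add($device)

    # If a restore path was passed in, relocate every data, log and full text file in the backup
    if ($RestorePath -ne "") {
        foreach ($dbfile in $restore.ReadFileList($server).Rows) {
            $relocate = New-Object Microsoft.SqlServer.Management.Smo.RelocateFile
            $relocate.LogicalFileName = $dbfile.LogicalName
            $relocate.PhysicalFileName = Join-Path $RestorePath (Split-Path $dbfile.PhysicalName -Leaf)
            $restore.RelocateFiles.Add($relocate) | Out-Null
        }
    }

    # Perform the actual restore
    $restore.SqlRestore($server)

    # Optionally "upgrade" the database to the server's current compatibility level (model's)
    if ($Upgrade) {
        $server.Databases.Refresh()   # so the newly restored database shows up
        $db = $server.Databases[$DatabaseName]
        $db.CompatibilityLevel = $server.Databases["model"].CompatibilityLevel
        $db.Alter()
    }
}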
The second function, Database-Backup, takes the following parameters:
SQLServer – string: the server name (and instance name if it’s a named instance)
DatabaseName – string: the database name
BackupFilePath – string: where to put the backup
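And a minimal sketch of Database-Backup, on the same assumptions as above:

# Sketch of Database-Backup: a full database backup to the given file
function Database-Backup {
    param(
        [string]$SQLServer,
        [string]$DatabaseName,
        [string]$BackupFilePath
    )

    # Create a connection to the SQL Server instance
    $server = New-Object Microsoft.SqlServer.Management.Smo.Server($SQLServer)

    # Build a full database backup and point it at the backup file
    $backup = New-Object Microsoft.SqlServer.Management.Smo.Backup
    $backup.Action = "Database"
    $backup.Database = $DatabaseName
    $device = New-Object Microsoft.SqlServer.Management.Smo.BackupDeviceItem($BackupFilePath, "File")
    $backup.Devices.Add($device)

    # Perform the backup
    $backup.SqlBackup($server)
}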
So how are these going to help us migrate our databases? Simple: we can call them repeatedly and leave them running while we get on with something more interesting. If you’re lucky enough to have a shared backup drive that all your SQL Server instances can read and write to, then the upgrade is as easy as:
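Something along these lines, with made up server names, database name and share path purely for illustration:

# 1. Back up the SQL Server 2000 database to the shared drive (names below are illustrative)
Database-Backup -SQLServer "OLDSERVER" -DatabaseName "AppDB" -BackupFilePath "\\backupshare\AppDB.bak"

# 2. Restore onto an intermediate SQL Server 2008 R2 instance, relocating files and upgrading
Database-Restore -SQLServer "MIDSERVER" -DatabaseName "AppDB" -BackupFilePath "\\backupshare\AppDB.bak" -RestorePath "E:\SQLData" -Upgrade $true

# 3. Back up the upgraded database
Database-Backup -SQLServer "MIDSERVER" -DatabaseName "AppDB" -BackupFilePath "\\backupshare\AppDB-2008.bak"

# 4. Restore onto the SQL Server 2012 instance and upgrade again
Database-Restore -SQLServer "NEWSERVER" -DatabaseName "AppDB" -BackupFilePath "\\backupshare\AppDB-2008.bak" -RestorePath "E:\SQLData" -Upgrade $true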
When placing the files, remember that the script runs as the Windows account calling it, but the SQL Server backups and restores will run as the Database Engine service account.
Now I can happily migrate those databases to a happier place repeatedly without having to take time out from other larger projects. Win for me, and win for the people I’m working for.
Hi, if you’ve just landed here you might be interested in the series of posts 31 days of SQL Server Backup and Restore with PowerShell, where I’ll be providing more information about the concepts and scripts in this presentation.
The download includes the presentation as a PDF, all the PowerShell scripts used during the presentation, the SQL scripts to build the demo databases, plus the backups of a couple of the databases to speed things up.
Readme.txt gives a quick overview of each script and the order to run through them in.
Just a quick heads up for anyone in Nottingham or elsewhere in the Midlands wanting some high quality introductory SQL Server content. The SQL Midlands user group has 2 great speakers on the 22nd August: John Martin (twitter) will be presenting a Beginners Guide to SQL Server, and Alex Whittles (blog|twitter) will be presenting a Beginners Guide to Business Intelligence.
The User Group is in Birmingham, but it’s a very simple trip down via train, with only a short walk from New Street to get to the venue. So well worth making the effort to get down for.
If you’re interested, register here to make sure there’s plenty of pizza.