SQL Relay bringing free SQL Server Training to Nottingham

Having spoken at the last couple of SQL Relays, I’m very happy to announce that in 2015 we’re visiting Nottingham for the first time.

On the 7th October SQL Relay will be at the Nottingham Albert Hall conference centre. Speaker submissions are now coming in (and if you fancy speaking, please submit a session), and we hope to release the full lineup early in August. Based on previous years there will be a wide range of high-quality presenters.

At Nottingham we will be having 4 tracks on the day:

  • SQL Server
  • Business Intelligence
  • Business Analytics
  • Workshops provided by our sponsors

Across these we’ll be covering topics such as SQL Server performance, management and development, Azure/Cloud technology, R, Big Data, Cubes, Reporting, Dashboarding and much more.

If you’re a SQL Server DBA, then this could be a great opportunity to pick up some Business Intelligence crossover training. Or if you’re a Business Intelligence developer, find out how to use Big Data solutions (on Azure, without the hardware costs) to analyse your data even further.

All of this great SQL Server content is available for the princely sum of nothing. That’s right, it’s free, and we’ll even provide you with lunch.

We’re open for registrations, so sign up today and look forward to some quality free SQL training in Nottingham.

If you’ve any questions, then please ask them.

Finding the bottom folders of a file tree using PowerShell

There’s a simple way of getting PowerShell to tell you the last folder in a known path:

$path = '\\server\share\folder1\folder2\folder3'
$folder = Split-Path $path -Leaf

and that’s all you need.

Now, how do you go about doing that if you don’t know the path ahead of time? That is, you have a large collection of directories, and all you want are those folders that make up the bottom levels of the branches. In this graphic it would be all the folders underlined.

Example of bottom folders

As an example, current $EMPLOYER holds backups from all SQL databases on a share, with each database’s backups being in a folder. Due to various DBAs over the years the depth of the tree to the actual backups isn’t fixed. So I need to find the “bottom” folder so I can process the backup files in there.

Well, that’s a little trickier. So let’s have a look at a couple of ways of doing that:
Continue reading
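As a taster, one rough way of tackling it (a sketch only, and not necessarily the approach the rest of the post settles on) is to treat any directory with no child directories as a “bottom” folder:

# Needs PowerShell 3.0 or later for the -Directory switch on Get-ChildItem
$root = '\\server\share'    # placeholder root path

Get-ChildItem -Path $root -Recurse -Directory |
    Where-Object { -not (Get-ChildItem -Path $_.FullName -Directory) } |
    Select-Object -ExpandProperty FullName

It’s not the quickest option on a big share, as it lists each folder’s children a second time, but it’s a handy starting point.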

Bulk adding Active Directory service accounts with PowerShell

I’ve recently had to create a lot of new service accounts to support some new infrastructure projects. As these are supporting a lot of web services, there’s been a large number of service accounts that needed creating to own all the app pools.

To make life simpler when this moves from implementation to support, all of these accounts want to be set up the same way. And we want secure passwords as well.

And then those accounts also need to be set up for each of our prod, stage, dev and test environments.

So: a dull, repetitive task where the details need replicating exactly. This sounds like the perfect job to automate with some PowerShell.

[Reflection.Assembly]::LoadFile('C:\Windows\Microsoft.NET\Framework\v4.0.30319\system.web.dll') | out-null

# New-ADUser and Add-ADGroupMember come from the ActiveDirectory module
Import-Module ActiveDirectory

function New-Password ($length = 12){
        $p = [System.Web.Security.Membership]::GeneratePassword($length,2)
        $password = $p | ConvertTo-SecureString -AsPlainText -Force
        $retpass = @{"string"=$p; "secstring"=$password}
        return $retpass
}

$environments = @()

# The prefix and group values below are illustrative placeholders;
# substitute whatever naming convention and groups your AD admins use
$environments += @{"env"="prod";
                   "prefix"="svcp-";
                   "adpath"="ou=Prod Accounts, ou=Service Accounts, ou=Domain Controllers, dc=ad, dc=contoso, dc=com";
                   "identgroup"="Prod Service Accounts";
                   "rogroup"="Prod Service Accounts ReadOnly"}

$environments += @{"env"="test";
                   "prefix"="svct-";
                   "adpath"="ou=Test Accounts, ou=Service Accounts, ou=Domain Controllers, dc=ad, dc=contoso, dc=com";
                   "identgroup"="Test Service Accounts";
                   "rogroup"="Test Service Accounts ReadOnly"}

$environments += @{"env"="dev";
                   "prefix"="svcd-";
                   "adpath"="ou=Dev Accounts, ou=Service Accounts, ou=Domain Controllers, dc=ad, dc=contoso, dc=com";
                   "identgroup"="Dev Service Accounts";
                   "rogroup"="Dev Service Accounts ReadOnly"}

$Users = Import-Csv -Path "c:\temp\users.csv"

$out = ""       # simple text log of which environment we're working through
$output = @()   # holds the details of every account we create

foreach ($environment in $environments){
    $out += "`n "+$environment.env + "`n"
    foreach ($user in $Users){
        $password = New-Password -length 15
        $username = $environment.prefix+$user.user.trim()
        New-ADUser $username -Path $environment.adpath -AccountPassword $password.secstring -DisplayName $username -PasswordNeverExpires $true -ChangePasswordAtLogon $false -CannotChangePassword $true -Enabled $true
        Add-ADGroupMember $environment.identgroup -Members $username
        Add-ADGroupMember $environment.rogroup -Members $username
        $output += @{"username"=$username; "password"=$password.string; "Environment"=$environment.env}
    }
}

First off, we’ll want to be creating secure, complex passwords. Rather than rewrite everything from scratch, we’ll use the built-in GeneratePassword method from .NET’s System.Web assembly, which we load in the first line.

Then we’ll wrap this call in a PowerShell function to make calling things easier:

function New-Password ($length = 12){
        $p = [System.Web.Security.Membership]::GeneratePassword($length,2)
        $password = $p | ConvertTo-SecureString -AsPlainText -Force
        $retpass = @{"string"=$p; "secstring"=$password}
        return $retpass
}

We pass in a length parameter for later use, but set a default to guarantee at least 12 characters. Then we create a new password of the required length; the second parameter to GeneratePassword specifies the minimum number of special characters (@, !, # etc.) that the password has to contain. GeneratePassword returns the password as a string, but we need it as a SecureString to pass to an Active Directory cmdlet later on, so we convert it. Then we return the password both as a string and as a SecureString via a hashtable.
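As a quick illustration of how it gets used further down:

$pw = New-Password -length 15
$pw.string      # the plain-text password, which we keep for the handover list
$pw.secstring   # the SecureString version, ready for New-ADUser -AccountPassword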

Now we set up the environments we’ll be creating our accounts for. I like to do this using arrays and hashtables. We have an array $environments which holds the details of all of the environments we want to create. For my needs, each environment is defined as follows (the prefix and group names are placeholders; substitute your own conventions):

$environments += @{"env"="dev";
                   "prefix"="svcd-";
                   "adpath"="ou=Dev Accounts, ou=Service Accounts, ou=Domain Controllers, dc=ad, dc=contoso, dc=com";
                   "identgroup"="Dev Service Accounts";
                   "rogroup"="Dev Service Accounts ReadOnly"}

The values in the hash table are:

  • The environment Name
  • The prefix for the usernames for this environment
  • The path in Active Directory for the OU the user account should be created in
  • A group for all the accounts in this environment to belong to
  • Another group for all the accounts in this environment to belong to

This is based on the AD setup I’m currently working with; your needs may be different. If you’re not the Active Directory admin, then please go and talk to them about how they’d like things configured.

Now that we’ve got everything set up, let’s create some accounts!

I load the usernames from a csv file, which is just a single column that looks something like this (the names here are placeholders):

user
webappsvc
reportingsvc
searchsvc

The first line stays as user, so that we can use it to get the values later on.

Create an array $output to hold our output.

Now we loop through each of our environments, and then for each environment we loop through our list of users.

For each user:

  • Create a new secure password using our previously defined function
  • Build a full username using the supplied username and the appropriate prefix for the environment
  • Create the user in Active Directory based on our environment
  • Add them to any groups we want them in
  • Add the newly created users details to our output object

And we’re done. In the great tradition of PowerShell I’ve kept the user details in an object $output so they can easily be passed along to another script, command or whatever you want.

If you want to just dump them to screen, then you can use:

$output | Format-Table 

Or redirect the object to your secure password storage utility of choice.
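For example, if a simple CSV handover file is enough, something along these lines would do it (the path is just an example):

# Each entry in $output is a hashtable, so turn them into objects before exporting
$output | ForEach-Object { [pscustomobject]$_ } |
    Export-Csv -Path 'c:\temp\new-service-accounts.csv' -NoTypeInformation

Just remember that the file contains plain-text passwords, so treat it accordingly.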

And another dull repetitive task is handed off to a simple script.

Upcoming Presentations

Well, it’s been a while since I last did some presenting, mainly due to having purchased a new home and having gotten stuck in with renovating it.

After a successful session up at the SQL Server User Group North East last week talking about using PowerShell to migrate SQL Server instances, I’ve another 3 sessions lined up around the country: 2 as part of the SQLRelay set of events, and 1 at the Leicester SQL Server User Group.

All 3 sessions will be my presentation “Transaction Log Basics”. This abstract should give you a good idea of what to expect:

Every SQL DBA has at least 5 transaction logs that they need to take care of. But what exactly is going on inside that file? Why can it be a performance issue? Is it really that important? And why does it keep filling my disks when I’m not looking?

This session will answer all of those questions. We’ll be looking at how SQL Server relies on its transaction log, best practices that will keep it performing properly, and explaining why it’s so vital to your databases and your job to keep it safe!

It’s a perfect introduction to SQL Server transaction logs and also a good way to make sure you won’t fall into problems later on.

The dates are:

SQL Relay Southampton – 14th October
Leicester SQL Server User Group – 15th October
SQL Relay Birmingham – 29th October (unfortunately this event is fully subscribed, but there is a waiting list you can sign up for)

Looking forward to seeing plenty of people at these events. If you’re reading this and planning on attending, then let me know, or come and say hello on the day.

Deleting Large amounts of data in SQL Server – Part 1

There comes a time when you’ll be asked to remove a large amount of data from one of your SQL Server databases. Perhaps an archive of order information needs pruning, or those session records that aren’t needed anymore need removing.

You’ve been off to have a look, and it all looks simple enough. It looks like there’s a couple of million rows to delete, but you can identify them all easily. How easy is this going to be, you think:

DELETE FROM Table1 where DateAdded<getdate()-90;

And then it just sits there, and sits there, and sits there. Or you get a nice error message about breaking a key constraint. What to do about it?

Continue reading
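Without giving the rest of the series away, the usual trick for the ‘just sits there’ case is to delete in smaller batches so that each transaction stays short. A rough sketch driven from PowerShell (server, database and batch size are placeholders, and this isn’t necessarily the approach the next part takes):

# Needs the SQLPS or SqlServer module for Invoke-Sqlcmd
$query = @"
DELETE TOP (10000) FROM Table1 WHERE DateAdded < GETDATE() - 90;
SELECT @@ROWCOUNT AS Deleted;
"@

do {
    $result = Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDatabase' -Query $query
} while ($result.Deleted -gt 0)    # keep going until a pass deletes nothing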

Adding SQL Server jobs using PowerShell

A sudden requirement for a new SQL Server agent job to be put on to every instance in our Enterprise was hardly greeted with joy. At least by those who hadn’t been playing with PowerShell.

Just like everything else in SQL Server, we can automate SMO to achieve our ends. And once we’ve got it working on one SQL Server instance, it’s simple to extend this to hundreds.
Continue reading
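The full post walks through the detail, but the general shape with SMO looks something like this (a minimal sketch with made-up instance and job names, assuming the SMO assemblies are installed):

# Load SMO; the Agent classes (Job, JobStep) live in the main SMO assembly
[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')

$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'MyInstance'

# Create the job and point it at the local server
$job = New-Object Microsoft.SqlServer.Management.Smo.Agent.Job ($server.JobServer, 'Nightly maintenance')
$job.Create()
$job.ApplyToTargetServer($server.Name)

# Give it a single T-SQL step to run
$step = New-Object Microsoft.SqlServer.Management.Smo.Agent.JobStep ($job, 'Run CHECKDB')
$step.SubSystem    = [Microsoft.SqlServer.Management.Smo.Agent.AgentSubSystem]::TransactSql
$step.DatabaseName = 'master'
$step.Command      = 'DBCC CHECKDB;'
$step.Create()

Wrap that in a loop over a list of instance names and the same job lands on every server in the estate.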

SQL Saturday Exeter 2014 Redux

Another great event from the SQL SouthWest team this year. Great atmosphere at the pre-event Surf Party on the Friday night, catching up with old friends and making some new ones. Those who had a go on the mechanical surfboard seemed to enjoy themselves; unfortunately my knee injury meant I couldn’t have a go.

As I’ve mentioned before, I was given the opportunity to try something a little bit different. So I was presenting a double session on “PowerShell for the curious DBA”, which I’d targeted as a short introduction to PowerShell for SQL Server DBAs who’d never really met it before. I’ve uploaded the deck and scripts here if you’d like a copy.

Feedback on the session and the format from the attendees was good, and I’m looking forward to seeing the ‘official’ feedback from the organisers as well. I thought the format of the session was great; it allowed a topic to be covered in more depth, but wasn’t such a long session that you were ‘committed’ to missing lots of other sessions.

Talking of other sessions I went along to the following:

Hugo Kornelis (b | t) – Good session on using Windowing functions to work around some more complex tasks in T-SQL, definitely something you want in your toolbox.

Kevan Riley (t) – Exploring some of the darker parts of SQL Server explain plans. Turns out it’s well worth checking the underlying XML as well as the nice graphical views.

Kevin Chant (t) – Certainly looks like anyone wanting to generate proper management reports will want to be checking out SharePoint and PowerView.

Allan Mitchell (t) – Finally a proper explanation of how Hadoop hangs together, much more information than lots of reading around on t’tinternet.

Whittling down to that selection from the available sessions (here) was quite a task as well; there was lots of quality content on offer, so picking which session to go to was hard.

Unfortunately I couldn’t stay for the post-event curry as I had a 4 hour drive back to Nottingham, but the photos make it look like a lot of fun, so I’m already planning a 2nd night’s stay if there’s a 2015 event.

Installing CRM 2011 on Windows 2012, error parsing Config file

Just been banging my head repeatedly with this one, and have finally found the cause of all the heartache.

As has been written in many many places, to install Dynamics CRM 2011 on a Windows 2012 Server, you need to ‘slipstream’ in RollUp 13. So, after reading one of the many blogs online about how to do this, you’ve got the following example config.xml all ready to go:

<Patch update=”true”>C:\RU13\Server\server_kb2791312_amd64_1033.msp</Patch>

Excellent, you think, as you run SetupServer.exe.

And then up pops the handy error:

CRM Config file parse error

You dutifully trek off to look at the install logs, where there’s the helpful error message:

Error| Error parsing config file(HRESULT = 0xC00CE502) (configfileparse.h:CConfigFileParser::Parse:435).

Which is as useful as a chocolate teapot. Searching for this error across the web throws up lots of suggestions: randomly adding Administrator rights, changing the paths of files, and various other goodies. I tried all the ones I could find. In this particular case, though, there’s only one change that needs to be made to the config file:

<Patch update="true">C:\RU13\Server\server_kb2791312_amd64_1033.msp</Patch>

Can you spot it? The change is to the " characters in the Patch tag. In a number of examples online (and in my first attempt) these are curly ” characters instead. Just to make that clearer:

" works. That’s the ascii character returned by select char(34)

” doesn’t work. That’s the character returned by select char(148)
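If you want to check a config file for stray smart quotes before kicking off setup again, a quick bit of PowerShell will flag them (the path is just an example):

# Reports the line number of any curly single or double quotes in the file
Select-String -Path 'C:\RU13\config.xml' -Pattern '[\u2018\u2019\u201C\u201D]' |
    ForEach-Object { "Line $($_.LineNumber): $($_.Line.Trim())" }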

Hopefully this will help save some people’s foreheads, and some companies from dents in their desks.

Error: 18456, Login failed for user, Reason: Token-based server access – A possible cause

One of our developers came to me with a problem they were having with a new app they’d put live on IIS. They were getting a lovely 500 error back about SQL Server connection issues.

So, as usual my first port of call when dealing with connection issues is to have a look through the SQL Server error log. And, once again it turned up the error:

2014-02-20 10:56:29.210	Logon	Error: 18456, Severity: 14, State: 11.
2014-02-20 10:56:29.210	Logon	Login failed for user 'Domain\Acme-test-web-1$'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT:]

Looks promising and simple, doesn’t it? The $ on the end of the Active Directory object name tells us that this is a machine account. So we have 2 options here: either the server itself is trying to connect, or it’s impersonating another user for the connection (the Kerberos Double Hop issue).

A quick look through the logins on this SQL Server showed that Domain\Acme-test-web-1$ had a valid login, and also permissions to the database it was trying to connect to.

So now I moved on to looking at any errors generated in the SQL Server ring buffers. For this I use the following query:

SELECT CONVERT (varchar(30), GETDATE(), 121) as runtime,
dateadd (ms, (a.[Record Time] - sys.ms_ticks), GETDATE()) as [Notification_Time],
a.* , sys.ms_ticks AS [Current Time]
FROM
(SELECT
x.value('(//Record/Error/ErrorCode)[1]', 'varchar(30)') AS [ErrorCode],
x.value('(//Record/Error/CallingAPIName)[1]', 'varchar(255)') AS [CallingAPIName],
x.value('(//Record/Error/APIName)[1]', 'varchar(255)') AS [APIName],
x.value('(//Record/Error/SPID)[1]', 'int') AS [SPID],
x.value('(//Record/@id)[1]', 'bigint') AS [Record Id],
x.value('(//Record/@type)[1]', 'varchar(30)') AS [Type],
x.value('(//Record/@time)[1]', 'bigint') AS [Record Time]
FROM (SELECT CAST (record as xml) FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = 'RING_BUFFER_SECURITY_ERROR') AS R(x)) a
CROSS JOIN sys.dm_os_sys_info sys
ORDER BY a.[Record Time] ASC

Which I picked up years ago from the MSDN blog post Troubleshooting specific Login Failed error messages.

Which returned the following truncated results (also reformatted to fit better):

runtime            2014-02-20 15:30:47.807
Notification_Time  2014-02-20 10:56:03.147
ErrorCode          0x534
CallingAPIName     LookupAccountSidInternal
APIName            LookupAccountSid
SPID               61
Record Id          69
Type               RING_BUFFER_SECURITY_ERROR
Record Time        637951566
Current Time       649096225

This pointed to a login not being found. As we knew the machine account had a working login, we went back to look at the IIS box that was doing the calling.

A quick peruse of the Application Pools showed an obvious cause: one of the app pools had been left running as ApplicationPoolIdentity. This is a built-in, server-level account which is there to let people set up application pools quickly, but being a server account it has no permissions off of the box. When it requests access to a remote resource it delegates via the machine account. So, in our case, this application pool was trying to connect to the SQL Server by delegation, and while the login reported as failing was fine, it was only acting on behalf of an account that didn’t have a valid login.
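If you want to check this quickly on the web server, the WebAdministration module that ships with IIS will list the identity each application pool runs under (a sketch; assumes IIS 7.5 or later with the module available):

Import-Module WebAdministration

# Built-in identities such as ApplicationPoolIdentity or NetworkService show up as IdentityType
Get-ChildItem IIS:\AppPools |
    Select-Object Name,
                  @{Name='IdentityType'; Expression={$_.processModel.identityType}},
                  @{Name='UserName';     Expression={$_.processModel.userName}}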

A new service account was requested, and once live the application pool identity was changed over to that. And the problem disappeared.

Having had a quick BinGle around, there are plenty of forum posts and other people talking about this issue, but I didn’t see this specific situation mentioned. I also saw a number of ‘solutions’ talking about adding random accounts to the server’s Administrators group, which is almost NEVER the correct solution for a SQL Server login problem.

Mission Critical SQL Server with Allan Hirt

Like a lot of SQL Server DBAs I’ve been working with the various SQL Server High Availability and Disaster Recovery tools over the years. Employers and Clients have always been happy with the results, but you always want to have some confirmation that you’re working to best practices.

So I was very happy when I saw that Technitrain were offering a 4 day course with Allan Hirt (b|t) of SQLHA on “Mission Critical SQL Server”. Having had Allan’s last books on my shelf for years (Pro SQL Server 2005 High Availability and Pro SQL Server 2008 Failover Clustering), and having already stumped up for his new e-book (details and information here, and buy it here), I was well aware of his knowledge of SQL Server failover and knew the course would cover a fair amount of information.

The course was run at Skills Matter in North London, which meant a fair amount of reasonably priced accommodation nearby for those coming in from out of town. And for caffeine fiends like me, there’s plenty of GOOD coffee available nearby for an early morning wake up. The training area was roomy and well laid out. We had lunch brought in each day, which saved time having to forage; there was a varied choice each day with plenty of options for all diets. Free tea and coffee on tap in the breakout area, with a good selection of biscuits, fruit and Tunnocks Tea Cakes on offer as well.

At the start of Day 1 Allan informed us that today would most likely involve no labs! Thankfully this didn’t mean death by PowerPoint, as it would have done with various other training bodies. Slides were kept to a minimum, with lots of discussion of HA topics that a lot of in-the-trenches DBAs don’t really consider, such as:

  • Why does the Business want this?
  • Do they appreciate the challenge?
  • Do their apps even support this?
  • Just how much difference there is between 3 and 5 9’s of uptime
  • Planning for patching
  • Keep it simple

And the usual DBA foe: documentation. What’s the point of an HA solution if only one person knows how to maintain it? That’s just as much of a single point of failure as only having one network path to your storage!

We then moved on to the basics of Windows clustering. For some SQL Server DBAs the amount of time that Allan spends at the Windows level may be a surprise, but as he explains, a Windows Server Failover Cluster (WSFC) is the basic foundation of SQL Server’s HA technologies (Failover Clustering and Availability Groups). This means you need to understand it, and also make sure that any server colleagues know how much you’re going to be relying on it, so when you ask for something specific there’s going to be a GOOD reason.

Day 2 rolled around, and with it Labs. I really liked the way labs worked on this course. We each had our own virtual environment hosted by Terillian’s LabOnDemand service which we could access from our own laptops via a web browser. As a MacBook Pro user I was very happy that they offered HTML5, Flash and Silverlight versions of their application, rather than relying on ActiveX. No one on the course had any problems apart from one overly restricted corporate laptop. By having remote access to the apps from our own kit this meant we could continue working on labs after course hours, or go back to revise them. Allan is currently exploring options to allow attendees to be able to take out a subscription to keep the environments after the course ends, which will be great for people without dedicated test infrastructures or who want to really tear things apart without worrying.

The first lab was configuring a new machine as a 2nd node, then building the Windows Cluster, adding a SQL Server instance to both nodes, then exploring patching the instance and adding new storage. 3 sets of instructions were provided for us:

  • Beginner – Every step was laid out telling you how to complete the task, so a good walkthrough for someone who’d never built a Failover Cluster before
  • Intermediate – Less information provided, so it required a bit more knowledge than the beginner track; but as both tracks used the same virtual environment, if you became stuck you could flick between the 2 to try and solve your issues
  • Advanced – Basic configuration information provided, and the 2 nodes are running Windows Server Core (so no GUI, everything via PowerShell). This was a separate virtual environment to the above 2 tracks

I first went through doing a mixture of Beginner and Intermediate to make sure I was picking up the best practices and any hints on things I might have been missing in my own builds. Then, back at the hotel room that evening, I went through the Advanced lab as well. This really was one of the strengths of the course: you never felt you had to blitz through the labs and feel like you’d not gotten the best out of them, and if you weren’t sure then running through them a second time gave you a chance to clarify any points.

The labs were interspersed with more sessions of slides and Allan talking. These would normally coincide with progress through the labs. So once we’d all pretty much got our WSFCs built then we’d begin looking at SQL Server Failover Cluster Instances.

There was a lot of class interaction as we had a wide range of DBAs represented. From those looking to implement their first clusters, the charity sector, consultants with a wide range of clients and those looking to implement complex solutions for large Financial companies.  This meant we often went ‘off track’, but we’d always learn something new or be made to think outside of our own little boxes.

On Day 4 we started to look at the new feature that came in with SQL Server 2012: Availability Groups. This led to a few myths being dispelled, as attendees weren’t as used to this as the previous topics. We covered the differences between this new technology and its predecessor, Database Mirroring, and also covered complex situations using AGs to replicate data across multi-site Failover Cluster Instances.

Wrapping up, we all walked away with copies of all the slides from the 4 days, a subscription to Allan’s new book and plenty of knowledge to put to use. I also walked away with a license for SQL Sentry’s Power Suite, which was a nice bonus.

Chris and Allan seem quite keen on the idea of running this course again next year, and I’d certainly recommend it to anyone who wants to understand the various SQL Server High Availability options and the processes needed to back up the purely technical side.