Musings of a Data professional

Stuart Moore

February 2014

Installing CRM 2011 on Windows 2012, error parsing Config file

I’ve just been banging my head against this one, and have finally found the cause of all the heartache.

As has been written in many many places, to install Dynamics CRM 2011 on a Windows 2012 Server, you need to ‘slipstream’ in RollUp 13. So, after reading one of the many blogs online about how to do this, you’ve got the following example config.xml all ready to go:

<CRMSetup>
<Server>
<Patch update=”true”>C:\RU13\Server\server_kb2791312_amd64_1033.msp</Patch>
</Server>
</CRMSetup>

Excellent, you think, as you run SetupServer.exe.

And then up pops the handy error:

CRM Config file parse error

You dutifully trek off to look at the install logs, where there’s the helpful error message:

Error| Error parsing config file(HRESULT = 0xC00CE502) (configfileparse.h:CConfigFileParser::Parse:435).

Which is about as useful as a chocolate teapot. Searching for this error across the web throws up lots of suggestions: randomly adding Administrator rights, changing file paths, and various other goodies. I tried every one I could find. In this particular case, though, there’s only one change that needs to be made to the config file:

<CRMSetup>
<Server>
<Patch update="true">C:\RU13\Server\server_kb2791312_amd64_1033.msp</Patch>
</Server>
</CRMSetup>

Can you spot it? The change is to the "s in the Patch tag. In a number of online examples (and in my first attempt) these are ”s. Just to make that clearer:

"s (straight double quotes) work. That’s the ASCII character returned by select char(34).

”s (curly double quotes) don’t work. That’s the ASCII character returned by select char(148).
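If you want to check a config file before kicking off another install attempt, a quick PowerShell scan will flag any smart quotes that a copy and paste has dragged in. This is just a rough sketch – the path is an example, point it at your own config file:

# Flag any curly/smart quote characters that a copy and paste may have introduced
# The path below is just an example - point it at your own config file
$configPath = 'C:\RU13\config.xml'
Select-String -Path $configPath -Pattern '[\u201C\u201D\u2018\u2019]' |
    ForEach-Object { "Line $($_.LineNumber): $($_.Line)" }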

Hopefully this will help save some people’s foreheads, and some companies from dents in their desks.

Error: 18456, Login failed for user, Reason: Token-based server access – A possible cause

One of our developers came across a problem they were having with a new app they’d put live on IIS. They were getting a lovely 500 error back about SQL Server connection issues.

So, as usual, my first port of call when dealing with connection issues is to have a look through the SQL Server error log. And once again it turned up the error:

2014-02-20 10:56:29.210	Logon	Error: 18456, Severity: 14, State: 11.
2014-02-20 10:56:29.210	Logon	Login failed for user 'Domain\Acme-test-web-1$'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: 251.148.71.16]

Looks promising and simple, doesn’t it? The $ on the end of the Active Directory object name tells us that this is a machine account. So we have two options here: either the server itself is trying to connect, or it’s impersonating another user for the connection (the Kerberos double hop issue).

A quick look through the logins on this SQL Server showed that Domain\Acme-test-web-1$ had a valid login, and also permissions to the database it was trying to connect to.
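For anyone wanting to run the same checks, this is roughly what I look at. It’s a quick sketch using the SQLPS module’s Invoke-Sqlcmd – the instance and database names below are just placeholders:

# Rough check: does the machine account have a server login and a mapped database user?
# 'SQLServer01' and 'WebAppDB' are placeholders - swap in your own instance and database
Import-Module SQLPS -DisableNameChecking

$login = 'Domain\Acme-test-web-1$'

# Server level login
Invoke-Sqlcmd -ServerInstance 'SQLServer01' -Query "
    SELECT name, type_desc, is_disabled
    FROM sys.server_principals
    WHERE name = '$login';"

# Database user mapped to that login
Invoke-Sqlcmd -ServerInstance 'SQLServer01' -Database 'WebAppDB' -Query "
    SELECT dp.name, dp.type_desc
    FROM sys.database_principals AS dp
    JOIN sys.server_principals AS sp ON dp.sid = sp.sid
    WHERE sp.name = '$login';"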

So I moved on to looking at any errors recorded in the SQL Server ring buffers. For this I use the following query:

SELECT CONVERT(varchar(30), GETDATE(), 121) AS runtime,
       DATEADD(ms, (a.[Record Time] - sys.ms_ticks), GETDATE()) AS [Notification_Time],
       a.*,
       sys.ms_ticks AS [Current Time]
FROM (SELECT x.value('(//Record/Error/ErrorCode)[1]', 'varchar(30)') AS [ErrorCode],
             x.value('(//Record/Error/CallingAPIName)[1]', 'varchar(255)') AS [CallingAPIName],
             x.value('(//Record/Error/APIName)[1]', 'varchar(255)') AS [APIName],
             x.value('(//Record/Error/SPID)[1]', 'int') AS [SPID],
             x.value('(//Record/@id)[1]', 'bigint') AS [Record Id],
             x.value('(//Record/@type)[1]', 'varchar(30)') AS [Type],
             x.value('(//Record/@time)[1]', 'bigint') AS [Record Time]
      FROM (SELECT CAST(record AS xml) FROM sys.dm_os_ring_buffers
            WHERE ring_buffer_type = 'RING_BUFFER_SECURITY_ERROR') AS R(x)) a
CROSS JOIN sys.dm_os_sys_info sys
ORDER BY a.[Record Time] ASC

I picked this query up years ago from the MSDN blog post Troubleshooting specific Login Failed error messages.

It returned the following truncated results (reformatted here to fit better):

runtime                  Notification_Time        ErrorCode  CallingAPIName
2014-02-20 15:30:47.807  2014-02-20 10:56:03.147  0x534      LookupAccountSidInternal

APIName           SPID
LookupAccountSid  61

Record Id  Type                        Record Time  Current Time
69         RING_BUFFER_SECURITY_ERROR  637951566    649096225

This pointed to a login not being found. As we knew the machine account had a working login, we went back to look at the IIS box that was doing the calling. A quick peruse of the Application Pools showed an obvious cause: one of the app pools had been left running as the built-in ApplicationPoolIdentity.

This is a built-in, server-level account that’s there to let people set up application pools quickly. Being a local account, it has no permissions off the box, so when it requests access to a remote resource it delegates via the machine account. In our case the application pool was connecting to the SQL Server by that delegation, so while the login reported as failing was fine, it was only acting on behalf of an account that didn’t have a valid login of its own.
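If you want to check your own app pools quickly, the WebAdministration module will list each pool and the identity it runs as. A rough sketch – run it on the IIS box itself:

# List each application pool and the identity it runs as (run on the IIS server)
Import-Module WebAdministration

Get-ChildItem IIS:\AppPools |
    Select-Object Name,
        @{ Name = 'IdentityType'; Expression = { $_.processModel.identityType } },
        @{ Name = 'UserName';     Expression = { $_.processModel.userName } }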

A new service account was requested, and once it was in place the application pool identity was switched over to it. And the problem disappeared.

A quick BinGle around shows plenty of forum posts and articles talking about this issue, but I didn’t see this specific situation mentioned. I also saw a number of ‘solutions’ talking about adding random accounts to the server’s Administrators group, which is almost NEVER the correct solution for a SQL Server login problem.

Mission Critical SQL Server with Allan Hirt

Like a lot of SQL Server DBAs I’ve been working with the various SQL Server High Availability and Disaster Recovery tools over the years. Employers and Clients have always been happy with the results, but you always want to have some confirmation that you’re working to best practices.

So I was very happy when I saw that Technitrain were offering a 4 day course with Allan Hirt (b|t) of SQLHA on “Mission Critical SQL Server”. Having had Allan’s previous books on my shelf for years (Pro SQL Server 2005 High Availability and Pro SQL Server 2008 Failover Clustering), and having already stumped up for his new e-book (details and information here, and buy it here), I was well aware of his knowledge of SQL Server failover and knew the course would cover a fair amount of ground.

The course was run at Skills Matter in North London, which meant a fair amount of reasonably priced accommodation nearby for those coming in from out of town. And for caffeine fiends like me, there’s plenty of GOOD coffee available nearby for an early morning wake-up. The training area was roomy and well laid out. We had lunch brought in each day, which saved time having to forage, with a varied choice each day and plenty of options for all diets. Free tea and coffee were on tap in the breakout area, with a good selection of biscuits, fruit and Tunnocks Tea Cakes on offer as well.

At the start of Day 1 Allan informed us that the day would most likely involve no labs! Thankfully this didn’t mean death by PowerPoint, as it would have done with various other training bodies. Slides were kept to a minimum, with lots of discussion of HA topics that a lot of in-the-trenches DBAs don’t really consider, such as:

  • Why does the Business want this?
  • Do they appreciate the challenge?
  • Do their apps even support this?
  • Just how much difference there is between 3 and 5 9s of uptime (see the quick sums after this list)
  • Planning for patching
  • Keep it simple
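The nines point is worth a quick bit of arithmetic, because the gap is bigger than it sounds. These are my own back-of-the-envelope sums, not course material:

# Rough allowed downtime per year for a given availability percentage
$minutesPerYear = 365.25 * 24 * 60
foreach ($availability in 99.9, 99.99, 99.999) {
    $downtime = $minutesPerYear * (1 - $availability / 100)
    '{0}% uptime allows roughly {1:N0} minutes ({2:N1} hours) of downtime a year' -f $availability, $downtime, ($downtime / 60)
}

That works out at roughly 8.8 hours a year at three nines versus just over 5 minutes at five nines.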

And the usual DBA foe: documentation. What’s the point of an HA solution if only one person knows how to maintain it? That’s just as much of a single point of failure as only having one network path to your storage!

We then moved on to the basics of Windows clustering. For some SQL Server DBAs the amount of time that Allan spends at the Windows level may be a surprise, but as he explains, Windows Server Failover Clustering (WSFC) is the basic foundation of the SQL Server HA technologies (Failover Clustering and Availability Groups). This means you need to understand it, and also make sure that any Windows Server colleagues know how much you’re going to be relying on it, so that when you ask for something specific there’s going to be a GOOD reason.

Day 2 rolled around, and with it Labs. I really liked the way labs worked on this course. We each had our own virtual environment hosted by Terillian’s LabOnDemand service, which we could access from our own laptops via a web browser. As a MacBook Pro user I was very happy that they offered HTML5, Flash and Silverlight versions of their application, rather than relying on ActiveX. No one on the course had any problems, apart from one overly restricted corporate laptop. Having remote access to the labs from our own kit meant we could continue working on them after course hours, or go back to revise them. Allan is currently exploring options to allow attendees to take out a subscription to keep the environments after the course ends, which will be great for people without dedicated test infrastructure or who want to really tear things apart without worrying.

The first lab was configuring a new machine as a second node, then building the Windows cluster, adding a SQL Server instance to both nodes, then exploring patching the instance and adding new storage. Three sets of instructions were provided for us:

  • Beginner – Every step was laid out telling you how to complete the task, so a good walkthrough for someone who’d never built a failover cluster before
  • Intermediate – Less information provided, so it required a bit more knowledge than the Beginner track, but as both tracks used the same virtual environment, if you became stuck you could flick between the two to try and solve your issues
  • Advanced – Basic configuration information provided, and the two nodes were running Windows Server Core (so no GUI, everything via PowerShell). This was a separate virtual environment to the other two tracks

I first went through doing a mixture of the Beginner and Intermediate tracks, to make sure I was picking up the best practice and any hints on things I might have been missing in my own builds. Then, back at the hotel that evening, I went through the Advanced lab as well. This really was one of the strengths of the course: you never felt you had to blitz through the labs and come away feeling you’d not got the best out of them, and if you weren’t sure about something, running through it a second time gave you a chance to clarify any points.

The labs were interspersed with more sessions of slides and Allan talking. These would normally coincide with progress through the labs, so once we’d all pretty much got our WSFCs built, we’d begin looking at SQL Server Failover Cluster Instances.

There was a lot of class interaction, as we had a wide range of DBAs represented: those looking to implement their first clusters, people from the charity sector, consultants with a wide range of clients, and those looking to implement complex solutions for large financial companies. This meant we often went ‘off track’, but we’d always learn something new or be made to think outside of our own little boxes.

On Day 4 we started to look at the big new feature that came in with SQL Server 2012: Availability Groups. This led to a few myths being dispelled, as attendees weren’t as used to this as the previous topics. We covered the differences between this new technology and its predecessor, Database Mirroring, and also covered complex situations using AGs to replicate data across multi-site Failover Cluster Instances.

Wrapping up, we all walked away with copies of all the slides from the 4 days, a subscription to Allan’s new book and plenty of knowledge to put to use. I also walked away with a license for a copy of SQL Sentry’s Power Suite, which was a nice bonus.

Chris and Allan seem quite keen on the idea of running this course again next year, and I’d certainly recommend it to anyone who wants to understand the various SQL Server High Availability options and the processes needed to back up the purely technical side.

Get cluster size for all disks and volumes on a Windows machine using PowerShell and WMI

Having been building some new SQL Server boxes recently, I wanted to check that the volumes had been formatted with the appropriate block size (64KB, as per this document – http://msdn.microsoft.com/en-us/library/dd758814.aspx).

I’m lucky, and have nice storage admins who’ll give me what I ask for, but it’s always better to check rather than assume. It can be a bit tricky if you discover they’re wrong after you’ve put the data on the disks.

Most examples on the interwebs appeared to rely on Get-PSDrive or fsutil. These are great little tools, but don’t play nicely with mount points, and since we use a lot of mount points they were out.

So I came up with this quick snippet to list all volumes, be they drive, mount point or unattached:

$wql = "SELECT Label, Blocksize, Name FROM Win32_Volume WHERE FileSystem='NTFS'"
Get-WmiObject -Query $wql -ComputerName '.' | Select-Object Label, Blocksize, Name

And now I’ve got a nice table listing everything owned by the box, with enough information to work out which volume, drive or mount point needs a bit of attention.

Label                                      Blocksize Name                               
-----                                      --------- ----                               
SQL-Web-Mnt                                    65536 L:\                                
SQL-Web-Mnt-Prod                               65536 L:\data1\                          
SQL-Acct-Mnt                                   65536 M:\                                
SQL-Acct-Mnt-Prod                              65536 M:\prod\                           
SQL-Acct-Mnt-Rep                               65536 M:\reporting\                      
SQL-Acct-Mnt-Other                             65536 M:\data\       

Out of the 60 volumes I checked across various boxes, only 1 was incorrect. But fixing it now will save some time further down the line.
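If you’ve got more than a handful of servers to check, a small extension of the snippet above will do the rounds and only report anything that isn’t 64KB. A rough sketch – the server names are just examples:

# Check a list of servers and report any NTFS volume not formatted with a 64KB allocation unit
# The server names are just examples - substitute your own
$servers = 'SQLServer01', 'SQLServer02'
$wql = "SELECT Label, Blocksize, Name FROM Win32_Volume WHERE FileSystem='NTFS'"

foreach ($server in $servers) {
    Get-WmiObject -Query $wql -ComputerName $server |
        Where-Object { $_.Blocksize -ne 64KB } |
        Select-Object @{ Name = 'Server'; Expression = { $server } }, Label, Blocksize, Name
}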
