Just like everything else in SQL Server, we can automate SMO to achieve our ends. And once we’ve got it working on one SQL Server instance, it’s simple to extend this to hundreds.
Another great event from the SQL SouthWest team this year. There was a great atmosphere at the pre-event Surf Party on the Friday night, catching up with old friends and making some new ones. Those who had a go on the mechanical surfboard seemed to enjoy themselves; unfortunately my knee injury meant I couldn’t have a go.
As I’ve mentioned before, I was given the opportunity to try something a little bit different. So I was presenting a double session on “PowerShell for the curious DBA”, which I’d targeted as a short introduction to PowerShell for SQL Server DBAs who’d never really met it before. I’ve uploaded the deck and scripts here if you’d like a copy.
Feedback on the session and the format from the attendees was good, and I’m looking forward to seeing the ‘official’ feedback from the organisers as well. I thought the format of the session was great: it allowed a topic to be covered in more depth, but wasn’t such a long session that you were ‘committed’ to missing lots of other sessions.
Talking of other sessions I went along to the following:
Kevan Riley (t) – Exploring some of the darker parts of SQL Server execution plans. It turns out it’s well worth checking the underlying XML as well as the nice graphical views.
Kevin Chant (t) – It certainly looks like anyone wanting to generate proper management reports will want to be checking out SharePoint and Power View.
Allan Mitchell (t) – Finally, a proper explanation of how Hadoop hangs together; much more information than lots of reading around on t’internet.
Whittling down to that selection from the available sessions (here) was quite a task as well; there was lots of quality content on offer, so picking which sessions to go to was hard.
Unfortunately I couldn’t stay for the post-event curry as I had a four-hour drive back to Nottingham, but the photos make it look like a lot of fun, so I’m already planning a second night’s stay if there’s a 2015 event.
Just been banging my head repeatedly with this one, and have finally found the cause of all the heartache.
As has been written in many many places, to install Dynamics CRM 2011 on a Windows 2012 Server, you need to ‘slipstream’ in RollUp 13. So, after reading one of the many blogs online about how to do this, you’ve got the following example config.xml all ready to go:
<CRMSetup>
  <Server>
    <Patch update=”true”>C:\RU13\Server\server_kb2791312_amd64_1033.msp</Patch>
  </Server>
</CRMSetup>
“Excellent”, you think, as you run SetupServer.exe.
And then up pops the handy error dialog.
You dutifully trek off to look at the install logs, where there’s the helpful error message:
Error| Error parsing config file(HRESULT = 0xC00CE502) (configfileparse.h:CConfigFileParser::Parse:435).
Which is about as useful as a chocolate teapot. Searching for this error across the web throws up lots of suggestions: randomly adding Administrator rights, changing the paths to files, and various other goodies. I tried all the suggestions that I could find. In this particular case there’s only one change that needs to be made to the config file:
<CRMSetup>
  <Server>
    <Patch update="true">C:\RU13\Server\server_kb2791312_amd64_1033.msp</Patch>
  </Server>
</CRMSetup>
Can you spot it? The change is to the quotes in the Patch tag. In a number of examples online (and in my first attempt) these are typographic ‘smart’ quotes. Just to make that clearer:
" works – that’s the plain ASCII double quote character.
” doesn’t work – that’s the typographic ‘smart’ quote, which isn’t an ASCII character at all. It’s what word processors and some blog platforms silently substitute when you type a straight quote.
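If you want to check which kind of quote a config file actually contains, the character codes make it obvious. A quick sketch in Python (my own illustration, not part of the CRM installer):

```python
# Straight ASCII double quote vs the typographic ("smart") quotes
# that word processors and some blog platforms substitute in.
straight = '"'         # U+0022, plain ASCII
smart_close = '\u201d' # ” right double quotation mark (not ASCII)

print(ord(straight))     # 34
print(ord(smart_close))  # 8221

# Flag any smart quotes lurking in a config file's text:
def find_smart_quotes(text):
    return [(i, ch) for i, ch in enumerate(text) if ch in '\u201c\u201d']

print(find_smart_quotes('<Patch update=\u201dtrue\u201d>'))
```

Running that over a copied-and-pasted config would flag both attribute quotes; retyping the quotes in a plain-text editor avoids the problem entirely.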
Hopefully this will help save some people’s foreheads, and some companies from dents in their desks.
One of our developers came to me with a problem they were having with a new app they’d put live on IIS. They were getting a lovely 500 error back about SQL Server connection issues.
So, as usual my first port of call when dealing with connection issues is to have a look through the SQL Server error log. And, once again it turned up the error:
2014-02-20 10:56:29.210 Logon  Error: 18456, Severity: 14, State: 11.
2014-02-20 10:56:29.210 Logon  Login failed for user 'Domain\Acme-test-web-1$'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: 251.148.71.16]
Looks promising and simple, doesn’t it? The $ on the end of the Active Directory object name tells us that this is a machine account. So we have two options here: either the server itself is trying to connect, or it’s impersonating another user for the connection (the Kerberos double-hop issue).
A quick look through the logins on this SQL Server showed that Domain\Acme-test-web-1$ had a valid login, and also permissions to the database it was trying to connect to.
So now I moved on to looking at any errors generated in the SQL Server ring buffers, for which I use this query:
SELECT CONVERT(varchar(30), GETDATE(), 121) AS runtime,
       DATEADD(ms, (a.[Record Time] - sys.ms_ticks), GETDATE()) AS [Notification_Time],
       a.*,
       sys.ms_ticks AS [Current Time]
FROM   (SELECT x.value('(//Record/Error/ErrorCode)[1]', 'varchar(30)') AS [ErrorCode],
               x.value('(//Record/Error/CallingAPIName)[1]', 'varchar(255)') AS [CallingAPIName],
               x.value('(//Record/Error/APIName)[1]', 'varchar(255)') AS [APIName],
               x.value('(//Record/Error/SPID)[1]', 'int') AS [SPID],
               x.value('(//Record/@id)[1]', 'bigint') AS [Record Id],
               x.value('(//Record/@type)[1]', 'varchar(30)') AS [Type],
               x.value('(//Record/@time)[1]', 'bigint') AS [Record Time]
        FROM   (SELECT CAST(record AS xml)
                FROM   sys.dm_os_ring_buffers
                WHERE  ring_buffer_type = 'RING_BUFFER_SECURITY_ERROR') AS R(x)) a
CROSS JOIN sys.dm_os_sys_info sys
ORDER BY a.[Record Time] ASC
I picked this up years ago from the MSDN blog post “Troubleshooting specific Login Failed error messages”.
It returned the following truncated results (also reformatted to fit better):
runtime           : 2014-02-20 15:30:47.807
Notification_Time : 2014-02-20 10:56:03.147
ErrorCode         : 0x534
CallingAPIName    : LookupAccountSidInternal
APIName           : LookupAccountSid
SPID              : 61
RecordID          : 69
Type              : RING_BUFFER_SECURITY_ERROR
Record Time       : 637951566
Current Time      : 649096225
This pointed to a login not being found. As we knew the machine account had a working login, we went back to look at the IIS box that was doing the calling. A quick peruse of the application pools showed an obvious cause: one of the app pools had been left running as the built-in "ApplicationPoolIdentity" account. This is a server-level account which is there to allow people to quickly set up application pools. But being a local account it has no permissions off of the box, so when it requests access to a remote resource it delegates via the machine account. In our case this application pool was trying to connect to the SQL Server by delegation, so while the login reported as failing was fine, it was only acting on behalf of an account that didn’t have a valid login.
A new service account was requested, and once it was live the application pool identity was changed over to it. And the problem disappeared.
A quick BinGle around shows plenty of forum posts and others talking about this issue, but I didn’t see this specific situation mentioned. I also saw a number of ‘solutions’ talking about adding random accounts to the server’s Administrators group, which is almost NEVER the correct solution for a SQL Server login problem.
Like a lot of SQL Server DBAs I’ve been working with the various SQL Server High Availability and Disaster Recovery tools over the years. Employers and Clients have always been happy with the results, but you always want to have some confirmation that you’re working to best practices.
So I was very happy when I saw that Technitrain were offering a 4 day course with Allan Hirt (b|t) of SQLHA on “Mission Critical SQL Server“. Having had Allan’s last books on my shelf for years (Pro SQL Server 2005 High Availability and Pro SQL Server 2008 Failover Clustering), and having already stumped up for his new e-book (details and information here, and buy it here), I was well aware of his knowledge of SQL Server failover and knew the course would cover a fair amount of information.
The course was run at Skills Matter in North London, which meant a fair amount of reasonably priced accommodation nearby for those coming in from out of town. And for caffeine fiends like me, there’s plenty of GOOD coffee available nearby for an early morning wake up. The training area was roomy and well laid out. We had lunch brought in each day, which saved time having to forage, with a varied choice each day and plenty of options for all diets. Free tea and coffee were on tap in the breakout area, with a good selection of biscuits, fruit and Tunnock’s Tea Cakes on offer as well.
At the start of Day 1 Allan informed us that today would most likely involve no labs! Thankfully this didn’t mean death by PowerPoint, as it would have done with various other training bodies. Slides were kept to a minimum, with lots of discussion of HA topics that a lot of in-the-trenches DBAs don’t really consider, such as:
- Why does the business want this?
- Do they appreciate the challenge?
- Do their apps even support this?
- Just how much difference there is between three and five 9s of uptime
- Planning for patching
- Keeping it simple
And the usual DBA foe: documentation. What’s the point of an HA solution if only one person knows how to maintain it? That’s just as much of a single point of failure as only having one network path to your storage!
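To put some numbers on the difference between three and five 9s, a quick back-of-the-envelope calculation (my own illustration, not from the course materials):

```python
# Allowed downtime per year for a given availability percentage,
# using a 365-day year (8,760 hours) for simplicity.
HOURS_PER_YEAR = 365 * 24

def downtime_hours_per_year(availability_pct):
    """Hours of downtime per year permitted by an uptime percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999):
    print(f"{pct}% uptime allows {downtime_hours_per_year(pct) * 60:.1f} minutes down per year")
# 99.9%   -> 525.6 minutes (about 8.8 hours)
# 99.999% -> 5.3 minutes
```

Five 9s leaves barely enough slack for a single reboot a year, which is why the planning-for-patching point above matters so much.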
We then moved on to the basics of Windows clustering. For some SQL Server DBAs the amount of time that Allan spends at the Windows level may be a surprise, but as he explains, the Windows Server Failover Cluster (WSFC) is the basic foundation of SQL Server HA technologies (Failover Clustering and Availability Groups). This means you need to understand it, and also make sure that any server colleagues know how much you’re going to be relying on it, so that when you ask for something specific they know there’s going to be a GOOD reason.
Day 2 rolled around, and with it labs. I really liked the way labs worked on this course. We each had our own virtual environment hosted by Terillian’s LabOnDemand service, which we could access from our own laptops via a web browser. As a MacBook Pro user I was very happy that they offered HTML5, Flash and Silverlight versions of their application, rather than relying on ActiveX. No one on the course had any problems apart from one overly restricted corporate laptop. Having remote access to the labs from our own kit meant we could continue working on them after course hours, or go back to revise them. Allan is currently exploring options to allow attendees to take out a subscription to keep the environments after the course ends, which will be great for people without dedicated test infrastructure or who want to really tear things apart without worrying.
The first lab was configuring a new machine as a 2nd node, then building the Windows Cluster, adding a SQL Server instance to both nodes, then exploring patching the instance and adding new storage. 3 sets of instructions were provided for us:
- Beginner – Every step was laid out telling you how to complete the task, so a good walkthrough for someone who’d never built a Failover Cluster before
- Intermediate – Less information was provided, so a bit more knowledge was needed than for the beginner track; but as both tracks used the same virtual environment, if you became stuck you could flick between the two to try and solve your issues
- Advanced – Basic configuration information provided, and the 2 nodes are running Windows Server Core (so no GUI, everything via PowerShell). This was a separate virtual environment to the above 2 tracks
I first went through doing a mixture of Beginner and Intermediate to make sure I was picking up best practice and any hints on things I might have been missing in my own builds. Then back at the hotel room that evening I went through the Advanced lab as well. This really was one of the strengths of the course: you never felt you had to blitz through the labs and risk not getting the best out of them, and if you weren’t sure then running through them a second time gave you a chance to clarify any points.
The labs were interspersed with more sessions of slides and Allan talking. These would normally coincide with progress through the labs. So once we’d all pretty much got our WSFCs built then we’d begin looking at SQL Server Failover Cluster Instances.
There was a lot of class interaction as we had a wide range of DBAs represented. From those looking to implement their first clusters, the charity sector, consultants with a wide range of clients and those looking to implement complex solutions for large Financial companies. This meant we often went ‘off track’, but we’d always learn something new or be made to think outside of our own little boxes.
On Day 4 we started to look at the new feature that came in with SQL Server 2012: Availability Groups. This led to a few myths being dispelled, as attendees weren’t as used to this as the previous topics. We covered the differences between this new technology and its predecessor, Database Mirroring, and also covered complex situations using AGs to replicate data across multi-site Failover Cluster Instances.
Wrapping up we all walked away with copies of all the slides from the 4 days, a subscription to Allan’s new book and plenty of knowledge to put to use. I also walked away with a license for a copy of SQL Sentry’s Power Suite which was a nice bonus.
Chris and Allan seem quite keen on the idea of running this course again next year, and I’d certainly recommend it to anyone who wants to understand the various SQL Server High Availability options and the processes needed to back up the purely technical side.
Having been building some new SQL Server boxes recently, I wanted to check that the volumes had been formatted with the appropriate block size (64KB, as per this document – http://msdn.microsoft.com/en-us/library/dd758814.aspx).
I’m lucky, and have nice storage admins who’ll give me what I ask for, but it’s always better to check rather than assume. It can be a bit tricky if you discover they’re wrong after you’ve put the data on the disks.
Most examples on the interwebs appeared to rely on fsutil. It’s a great little tool, but it doesn’t play nicely with mount points, and since we use a lot of mount points it was out.
So I came up with this snippet to quickly list all volumes, be they drive, mount point or unattached:
$wql = "SELECT Label, Blocksize, Name FROM Win32_Volume WHERE FileSystem='NTFS'"
Get-WmiObject -Query $wql -ComputerName '.' | Select-Object Label, Blocksize, Name
And now I’ve got a nice table listing everything owned by the box, with enough information to work out which volume, drive or mountpoint needs a bit of attention
Label              Blocksize Name
-----              --------- ----
SQL-Web-Mnt            65536 L:\
SQL-Web-Mnt-Prod       65536 L:\data1\
SQL-Acct-Mnt           65536 M:\
SQL-Acct-Mnt-Prod      65536 M:\prod\
SQL-Acct-Mnt-Rep       65536 M:\reporting\
SQL-Acct-Mnt-Other     65536 M:\data\
Out of the 60 volumes I checked across various boxes, only 1 was incorrect. But fixing it now will save some time further down the line.
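Once you’ve got that listing, picking out the volumes needing attention is just a filter on Blocksize. A sketch of the check in Python (the sample volume data here is made up for illustration; on the server itself the equivalent is a Where-Object on the pipeline above):

```python
# Flag any NTFS volume whose block size isn't the 64KB
# recommended for SQL Server data volumes.
EXPECTED_BLOCKSIZE = 64 * 1024  # 65536 bytes

# Hypothetical sample of the WMI query's output
volumes = [
    {"Label": "SQL-Web-Mnt",      "Blocksize": 65536, "Name": "L:\\"},
    {"Label": "SQL-Acct-Mnt-Rep", "Blocksize": 4096,  "Name": "M:\\reporting\\"},
]

def wrong_blocksize(volumes):
    """Return the volumes that need reformatting."""
    return [v for v in volumes if v["Blocksize"] != EXPECTED_BLOCKSIZE]

for v in wrong_blocksize(volumes):
    print(f"{v['Name']} ({v['Label']}): {v['Blocksize']} bytes, expected {EXPECTED_BLOCKSIZE}")
```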
Doing something a bit different at SQL Saturday Exeter 2014: PowerShell for the curious SQL Server DBA
So, I’ve been offered the chance to present 2 consecutive sessions covering 1 topic, 100 minutes to fill with something new. I’ve decided that based on feedback and questions from my “Using PowerShell for Automating Backups and Restores with PowerShell” presentation I’ll be filling the 100 minutes with:
PowerShell for the curious SQL Server DBA
Every job has those mundane repetitive tasks that you HAVE to do, day in, day out.
As SQL Server DBAs we have a particularly long list of things that we have to do to make sure our databases are working properly, and also to ensure that when they’re not we can fix them quickly enough that we’re not looking for a new job!
I spent a fair chunk of the Christmas break ripping yet more of my CD collection to mp3s so I can shove them on the home NAS, put the CDs in the loft and reclaim a load of space. Ripping them was a fairly predictably repetitive job of shoving CDs into the drive, waiting for the FreeDB lookup to run (or fail, which meant a bit of typing) and then hitting the go button.
Not the most fun of jobs, but still a lot less work than when I digitised large parts of my vinyl collection a few years ago. After a couple of days I had a nice collection of mp3s all sat there on a disk ready to be imported into the main collection folder.
Just one job remained: getting the album art together. This is always a fun job of getting the right image. I’ve been using Album Art Downloader for this for a while, as you can set plenty of options to remove most of the effort. But it doesn’t embed the images into the files, which, as an iDevice user, I want so that the pictures transfer across when I sync. This got me thinking: I’ve got a Windows file system with a nice hierarchical folder structure containing files, and I want to perform an action on each of the files in that structure.
Now if only there was some sort of Windows scripting language that was good at this sort of thing……..
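That task is exactly a recursive walk over the folder tree, doing something to each file. A sketch of the traversal in Python (the folder layout and the embed step are hypothetical stand-ins for illustration):

```python
import os

def files_needing_art(root, extension=".mp3"):
    """Walk the album folder tree and collect every track to process."""
    tracks = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(extension):
                tracks.append(os.path.join(dirpath, name))
    return tracks

# Hypothetical layout: Artist\Album\track.mp3
# for track in files_needing_art("D:\\Rips"):
#     embed_album_art(track)  # stand-in for the real tagging step
```

(The PowerShell equivalent, of course, is exactly what the teaser above is hinting at.)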
Not resolutions, oh no! A resolution can be broken by one relapse, goals are something I can keep working on even if there’s a setback!
Just putting them here, as once they’re shared it’s very hard to ignore them.
SQL Server related ones:
1) Get more inventive with BI work, and get involved in some Big Data technologies
- Want to get involved with some BI work that’s pushing the envelope a bit. Either Volume, Velocity or Variety wise. Also want to get involved in a few projects that are using newer technologies/ideas to extract information from data.
2) Automate even more of my workload
- Mainly because I can, and because it will free up more time to pursue the other 9 goals in this post!
3) Get really down and dirty with SQL Server Clustering and AlwaysOn
- I want to get to a really solid position with the various SQL Server HA and Replication technologies. I can build them, and debug them, but there’s always a suspicion that there’ll be something I don’t know. So lots of building test environments and finding inventive ways to break them! Also signed up for training from the master, Allan Hirt (b|t), being put on by Technitrain
4) Present some open access training courses.
- I’ve run in-house training courses and sessions for various companies, tutored a few people via email, and spoken at a number of SQL Server community events. Now I’m looking forward to offering some longer sessions to the general public. Discussions are underway about offering some day-long SQL Server DBA PowerShell training in the Nottingham/East Midlands area, and possibly a day on Extended Events as well
5) Speak at more SQL community events, maybe overseas
- I’ve really enjoyed the SQL Community events I’ve attended and spoken at during 2013. So I’m keen to speak at more in 2014. I’ve been sounded out about trying something new at SQL Saturday Exeter, which I’m really looking forward to (but not sure if I can completely announce it yet!) And there’s a number of User Groups I’ve not spoken at (think I’ve pretty much promised to make it up to SQL North East at some point), and some I’d love to go back to. Also very tempted to submit a session for a non UK event.
And because it’s not all about work, here’s some Personal ones:
1) Keep on at my OU Maths Degree
- I’ve been working towards a BSc in Mathematics for the last 4 years. So I’ll be continuing the studying this year, with a residential school to look forward to, and also beginning Level 3 modules, which should bring a change of pace
2) Once I find out about what I can do with my knee, target a 100km audax or a marathon
- Following an altercation with a section of Nottingham’s tram tracks I’m currently not cycling or running due to knee damage. Once the doc’s given me the prognosis I’ll make a decision about what sporting target to go for this year: either another 1000km+ Audax ride, or getting round to running a full marathon
3) Read more fiction and non-technical non-fiction
- It’s not been a great year for non-technical or non-degree reading, so I want to turn that around in 2014 and read a bit more for pure enjoyment. Will probably pick back up on my GoodReads list, and use that as a way to measure success
4) Pick my photography back up a bit
- Apart from a couple of weddings in 2013 I didn’t do much with my camera kit. Partly because our current kitchen layout isn’t good for film processing, and partly due to just not having one with me a lot of the time. So, I’ve got a new work bag which will make it easier to carry a camera with me, and I’ll get the crack across the lens of my iPhone repaired so I can use that as well. Hopefully be updating my Flickr account a little more often.
5) Find and settle into a new home
- We’re currently renting a house. This year’s big task is to find and buy a home. Plenty of planning has been done; now we just need to find somewhere and enjoy the legal niceties!
What are your goals and hopes for 2014? Feel free to share them with us in the comments.