Musings of a Data professional

Stuart Moore


Working around sqlcmd on Mac OS issues

So I’m busy working on my new dbaSecurityScan module, trying to write all the Pester tests for the AppVeyor pipeline.

Part of the testing for the module involves building a number of test scenarios, each of which needs a database spinning up for it.

On the Windows builds, that’s nice and easy as I just use this in the test script to loop through all the scenarios and run the .sql files:
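Roughly speaking, that loop amounts to something like this (the paths, server name and credentials here are placeholders rather than the real pipeline values):

# Sketch only: walk the scenario folders and feed each .sql file to sqlcmd
foreach ($scenario in Get-ChildItem -Path ./tests/scenarios -Directory) {
    foreach ($script in Get-ChildItem -Path $scenario.FullName -Filter *.sql) {
        sqlcmd -S $sqlInstance -U $sqlUser -P $sqlPassword -i $script.FullName
    }
}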

However, I’m developing on my MacBook Pro as I want this module to be nicely platform agnostic. SQL Server is running nicely in Docker and dbatools can connect to it happily. But I just can’t get sqlcmd to work: lots of TCP Provider: Error code 0x102 and TCP Provider: Error code 0x2AF9 messages no matter what I try. So, being short on time, I thought I’d try something else, and hopefully this will work when I get the Linux builds running as well.

The scripts I want to run create databases, users, schemas and a whole lot more, so there are lots of batch separators (; and GO) in them. Invoke-DbaQuery is a great function, but it doesn’t like working with batch separators (which is not through want of trying, it’s just really tricky). So it’s time to drop back to some raw SMO and run the scripts in with ExecuteNonQuery().
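Connect-DbaInstance hands back an SMO server object, and its ConnectionContext.ExecuteNonQuery() will happily run a script containing GO separators. A minimal sketch of the idea, with the instance name and credential as placeholders:

# Requires the dbatools module; $sqlInstance and $sqlCredential are placeholders
$server = Connect-DbaInstance -SqlInstance $sqlInstance -SqlCredential $sqlCredential
foreach ($script in Get-ChildItem -Path ./tests/scenarios -Filter *.sql -Recurse) {
    # ExecuteNonQuery copes with the ; and GO batch separators in the scenario scripts
    $server.ConnectionContext.ExecuteNonQuery((Get-Content -Path $script.FullName -Raw))
}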

I’ve added a variable to my builds that lets me pick whether I want to run via sqlcmd or via SMO. At the minute I’m just using it on my machine, but if I run into problems later, it’s nice to know I can just toggle it based on which platform the build is running on.
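I’m not going to pretend this is exactly how the build variable is wired up, but the toggle boils down to something like this ($env:UseSqlCmd is an invented name):

if ($env:UseSqlCmd -eq 'true') {
    # Windows builds: let sqlcmd worry about the batch separators
    sqlcmd -S $sqlInstance -i $script.FullName
}
else {
    # Mac (and hopefully Linux) builds: push the whole file through SMO instead
    $server.ConnectionContext.ExecuteNonQuery((Get-Content -Path $script.FullName -Raw))
}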

Resetting conflicting DTC CIDs with PowerShell

I’m currently migrating a lot of SQL Server instances onto newer virtual machines. Quite a few of these instances talk to each other via Linked Server for various historical reasons. And a lot of that chat is done via distributed transactions, which means configuring MS Distributed Transaction Coordinator.

So one of the first things I need to do is check that DTC is working between the 2 boxes. This is dead simple with the official PowerShell module for MSDTC. It’s as simple as running Test-Dtc and then working through any errors. So, we just run this:

Test-Dtc -LocalComputerName Source -RemoteComputerName Destination

and as this is a post about fixing something, it won’t surprise you that I’m going to get an error message:

"The OleTx CID on SOURCE and DESTINATION is the same. The CID should be unique to each computer."
At C:\Windows\system32\WindowsPowerShell\v1.0\Modules\MsDtc\TestDtc.psm1:266 char:13
+             throw ([string]::Format($Strings.SameCids, "OleTx", $Loca ...
+             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OperationStopped: ("The OleTx CID ...each computer.":String) [], RuntimeException
    + FullyQualifiedErrorId : "The OleTx CID on SOURCE and DESTINATION is the same. The CID should be unique to each computer."

But in a much more fetching shade of red and yellow, announcing that “The OleTx CID on SOURCE and DESTINATION is the same. The CID should be unique to each computer.”

The cause is really simple to grasp. When the Distributed Transaction Coordinator is installed it registers a GUID to identify itself, the theory being that a GUID clash should be a vanishingly rare occurrence.

That is, until someone’s cloning Virtual Machines. So I have a batch of shiny new VMs that all think they’re the same instance of DTC. That’s not so good. The accepted fix used to be to manually remove the Distributed Transaction Coordinator, clean the registry, restart and then reinstall everything. That sounds like a lot of work to me!

The msdtc module makes it very simple to do, so we’re starting off here:

PS C:\Windows\system32> Get-Dtc | Select-Object *


DtcName               : Local
KtmRmEndpointCid      : 72c409a9-9c7b-4d24-9e0c-b946a2e5aa4c
OleTxEndpointCid      : 3eb9ce34-4d2c-48cf-9ebe-d6e888f9b0ca
Status                : Started
UisEndpointCid        : c5f16d32-01c9-4b65-be57-4521fa4bb934
VirtualServerName     : SOURCE
XAEndpointCid         : fcec2fe2-eab2-4277-853a-6ea4d7736430
PSComputerName        :
CimClass              : root/MsDtc:DtcInstance
CimInstanceProperties : {DtcName, KtmRmEndpointCid, OleTxEndpointCid, Status...}
CimSystemProperties   : Microsoft.Management.Infrastructure.CimSystemProperties

While DTC normally installs with a default log path, we’ll just make sure and grab it first. Then it’s as simple as Uninstall-Dtc and Install-Dtc:

# Grab the existing DTC log path so the reinstall keeps the same location
$logPath = (Get-DtcLog).path
# Removing and reinstalling DTC generates a fresh set of CIDs
Uninstall-Dtc -Confirm:$false
Install-Dtc -LogPath $logPath -StartType AutoStart

And to check it’s worked, let’s query DTC again:

PS C:\Windows\system32> Get-Dtc | Select-Object *


DtcName               : Local
KtmRmEndpointCid      : 72c409a9-9c7b-4d24-9e0c-b946a2e5aa4c
OleTxEndpointCid      : 3eb9ce34-4d2c-48cf-9ebe-d6e888f9b0ca
Status                : Started
UisEndpointCid        : c5f16d32-01c9-4b65-be57-4521fa4bb934
VirtualServerName     : VMCLSTR-IVANTI
XAEndpointCid         : fcec2fe2-eab2-4277-853a-6ea4d7736430
PSComputerName        :
CimClass              : root/MsDtc:DtcInstance
CimInstanceProperties : {DtcName, KtmRmEndpointCid, OleTxEndpointCid, Status...}
CimSystemProperties   : Microsoft.Management.Infrastructure.CimSystemProperties

And there we have a new unique OleTxEndpointCid, and we’re good to go.


Using Log or Audit data responsibly

A couple of days ago I walked through getting information about data deletion using the SQL Server Transaction Log. We looked at how you could find when, and by whom, data had been removed from a SQL Server table.

So you’ve got the when, and you’ve got the who; now how do you proceed? Remember you could be holding someone’s job or professional reputation in your hands.

As a long time DBA I’m well aware that applications are not always the best written things, and instructions don’t always spell out correctly what an option is going to do.

In the case I was writing about I withheld the name from general knowledge; a single senior manager was made aware that I knew and would be investigating. Why did I do this?

In the case of this particular third party application, there were a couple of reasons:

  • Deleting data within the app should write to an internal audit trail; this hadn’t happened
  • Users cannot generate their own SQL queries, so this wasn’t an ad hoc query either
  • The single row delete was in its own transaction
  • This particular app is badly written. We’ve seen it fail to correctly write its own ini files on exit
  • It has a habit of crashing out for no reason

So, a bit like a lot of applications out there. Now I had some time to investigate, I started by going through the logs on the client Windows machine and trying to replicate the issue on my own.

Eventually I traced it back to a coding error in the application where they processed an Update as:

  1. Delete old data
  2. Insert new data
  3. Write audit log

None of this happened in an encompassing transaction (as seen by studying the Transaction Log). So when the app crashed just after step 1, the row was just left missing, with nothing written back in its place.

To the user who was logged in at the time, they wouldn’t have been aware of the crash as the GUI doesn’t display app status. The crash also happened at 16:55, which is pretty much home time, so they wouldn’t have looked at the record again.

Now I had the evidence, I could talk to the senior manager and show that we needed to raise this with the supplier, that the user was blameless, and that we should recommend users double check any updates to be sure.

Releasing the name to a less trusted audience without this background information would probably have led to the user being blamed for something that wasn’t really their fault.

So, just because you have the information, it doesn’t mean it’s actionable without adding more context and knowledge around it.

Now with added CISSP

It’s all been a bit quiet around here with lots on at work and training for a marathon. One piece of news I’ve not mentioned is that I’m now CISSP certified 👍. I sat the exam in December, but it’s taken a while for the paperwork and accreditation to get sorted and for everything to become official.

The main thing I’ve taken away from studying for the certification is that process rules everything. As in-the-trenches DBAs we’re more worried about whether the specifics of the SQL Server permissions or the Oracle auditing are correct. But why are we doing those things, and are we doing them correctly?

Correctly doesn’t just mean technically correct; we’re almost certainly doing them that way or things would break. What I mean is: are we correctly implementing the processes and policies that drive the rest of the business?

For example, everyone loves backups (yeah, I know, this is my favourite thing as well). But how much should you be keeping? Do you really need 3 years of backups? Is that just increasing the amount of data you could lose in a breach? Are the older backups encrypted, and would restoring and encrypting them break the purposes they were kept for?

If you’re never going to use them why are you keeping them? If it’s just for a CYA audit reason, then why not just keep the audit logs? Less chance of leaking PII or Financial data if you’ve just got the bare bones of X did Y on Z.

This alignment with organisational policy is a core requirement for doing SQL Server security correctly. It will allow you to concentrate on exactly what needs to be done, and not just running around implementing ad-hoc fixes every time a hole appears.

Expect more posts on this topic over the coming months. And if you want a deep dive into SQL Server security then I’m presenting a full day workshop at SQLBits 2020, SQL Server Security from the Ground Up, on Wednesday 1st April.


SQL Server Security from the ground up at SQLBits 2020

I’m pleased to announce that I will be presenting a full day workshop at SQLBits 2020 on Wednesday 1st April.

The topic for the day is ‘SQL Server Security from the ground up’.

We’ll be looking at what is required to ensure that the data stored in SQL Server is secure, and that your organisation can trust that data in its mission.

This is more than just a technical workshop. We’ll be spending time looking into how you can’t secure data without the organisation buying in to the process. To generate a working security policy you’re going to need approval from the top; if your CEO isn’t willing to enforce security then you’re fighting a losing battle. So you need to know how to present an argument at that level for the appropriate level of security and the resources to implement it.

So topics we’ll be covering will include:

  • Data Ownership
  • Risk Analysis
  • Separation of Duties
  • Policies and Responsibilities
  • Cost of Security
  • What is out of your hands
  • Organisation Education

Don’t worry, there’ll be plenty of technical content as well. We’ll be looking into:

  • Cloud vs On Premise
  • Setting up the operating system, if you have one
  • Setting up SQL Server
  • Permissions
  • Development best practices
  • Encryption
  • and much more

The session is aimed at all levels of SQL DBA, developer, or anyone who has to ensure the security of data. No previous experience is expected. Any technical examples will be provided so you can work with them on your own time, or take them back to show your colleagues.

Until 31st December the price for 2 full training days and 3 days of conference sessions is £999, moving up to £1,199, and then £1,499 from 15th February, so get in quick for a bargain.

If you’ve any questions then please drop me a comment, reply below, or get in touch via Twitter.

Prevent mistakes with Azure Resource Locks

Sometimes you have to give people a little more access to an Azure environment than you might like, and then there’s the chance of someone accidentally deleting a resource.

A resource deletion may not sound like too big a thing if you’re deploying Infrastructure as Code: hey, we’ll just terraform apply again and it’ll pop back up.

In theory that’s a great idea, just with one big problem: the new resource isn’t the old resource!

For example, an Azure SQL Database server is a unique resource. If you delete one you lose any backups you’ve taken, as they’re hosted on the server. Spinning up a new one isn’t going to get them back! A phone call to MS Support might, if you’re quick and lucky.

To avoid this you want to be using Azure Resource Locks. Think of these as the Azure version of child proof locks on your kitchen drawers. Yes, they may occasionally mean you’ve got an extra step to get a knife out, but the little ones can’t get their hands on it.

Azure Resource Locks

The first thing to know about Azure Resource Locks is that they apply to everyone and every role. Even if you’ve got the Owner role on a Resource Group via RBAC, if there’s an Azure Resource Lock on that Resource Group you’re going to be blocked until you’ve removed the lock.

This is great because it prevents those “oh ****, that was the wrong subscription” moments

Locks apply downwards from the resource they’re applied to. So if you apply one to a Resource Group, the lock applies to every resource within that resource group. Apply it to an Azure SQL Database server, and it will apply to all of the databases on that server.

Azure Resource Lock Types

Resource locks come in 2 flavours:

  • CanNotDelete
  • ReadOnly

CanNotDelete does what it says on the tin. Once this lock is applied the resource (and its children) cannot be deleted, even if you use -Force.

ReadOnly implements CanNotDelete and also prevents any modification of the locked resource and its children.
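Locks don’t have to sit at the resource group level either; you can scope one to a single resource. A quick sketch with Azure PowerShell (the server name here is made up):

# ReadOnly lock on a single (hypothetical) Azure SQL Database server
New-AzResourceLock -LockName NoChangesToSql -LockLevel ReadOnly -ResourceGroupName Lock -ResourceName lockdemosql -ResourceType Microsoft.Sql/servers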

Setting Azure Resource Locks

You can set Azure Resource Locks via the Azure Portal, Azure CLI, Azure PowerShell or ARM templates. Below is how you can set the same CanNotDelete lock on the Lock resource group using each of the 4 options:

  • Azure Portal
  • ARM Template

Create a template.json file:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "variables": {},
    "resources": [
        {
            "type": "Microsoft.Authorization/locks",
            "apiVersion": "2015-01-01",
            "name": "LockGroupNotDelete",
            "properties":
            {
                "level": "CanNotDelete",
                "notes": ""
            }
        }
    ],
    "outputs": {}
}

Which you’d deploy with:

New-AzResourceGroupDeployment -ResourceGroupName lock -Name lock -TemplateFile ./template.json
  • Azure CLI:
az lock create --name LockGroupNotDelete --lock-type CanNotDelete --resource-group Lock
  • Azure PowerShell:
New-AzResourceLock -LockName LockGroupNotDelete -LockLevel CanNotDelete -ResourceGroupName Lock

What you’ll see with Azure Resource Locks

So now we’ve seen how to create a resource lock, what are we going to see if we try to delete the resource group? Just to prove it works, and also so we know what to look out for when we bump into one we didn’t expect:

  • Azure Portal
  • Azure CLI
  • Azure PowerShell

As you can see, the Resource Locks stop you deleting the resource, which is nice. The error messages are also nice and informative, so you know the resource is locked and at which scope the lock is placed, which makes it easier to find the lock to remove it. Talking of removing locks:

Removing Azure Resource Locks

You can remove locks with any of the methods you can use to create them, so you’re free to mix and match how you do things.

  • Azure Portal
  • Azure CLI
az lock delete --name LockGroupNotDelete --resource-group Lock
  • Azure PowerShell
Remove-AzResourceLock -ResourceGroupName lock -LockName LockGroupNotDelete

dbatools’ Copy-DbaDatabase and Start-DbaMigration now supporting Azure Storage and Azure Managed Instance

I’m pleased to announce that we’ve now added Azure Storage support to all the dbatools commands that copy databases around. This is part of our march towards full support for migrating on premise SQL instances to Azure SQL Managed Instances.

These changes also make it much much simpler to copy or migrate databases across SQL Server instances in different Active Directory domains.

Prerequisites

The features in this post are available in dbatools’ main production branch from version 0.9.807 onwards.

You’ll need an Azure Storage account set up; previous blog posts on doing that can be found at:

I’d highly recommend you use the Shared Access Signature method, for 2 main reasons (there’s a quick sketch of setting the credential up just after this list):

  • Access Key auth is deprecated in SQL Server 2016+
  • You can’t stripe or use multiple storage accounts with Access Keys, but you can with Shared Access Signatures
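The SQL credential for a Shared Access Signature has to be named after the container URL, with SHARED ACCESS SIGNATURE as the identity. A rough sketch using dbatools’ New-DbaCredential (the SAS token below is a placeholder, and you need to run it against both the source and the destination instance):

# Placeholder values: swap in your own container URL and SAS token
$containerUrl = "https://azblogdemo.blob.core.windows.net/sql"
$sasToken = ConvertTo-SecureString "sv=2018-03-28&ss=b&srt=co&sp=rwdl" -AsPlainText -Force
New-DbaCredential -SqlInstance Server1 -Name $containerUrl -Identity "SHARED ACCESS SIGNATURE" -SecurePassword $sasToken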

Copy-DbaDatabase to Azure Managed Instance

So, now you’ve got credentials on the source and destination instances, and a working storage account, you’re ready to use Copy-DbaDatabase to move your database up to a Managed Instance.

If you’re using Shared Access Signatures, then it’s as simple as using your storage account URL as the SharedPath:

$copyParams = @{
    Source = "Server1"
    Destination = "MyNewMI.public.cus29s972e4513d6.database.windows.net,3342"
    DestinationCredential = $cre
    SharedPath = "https://azblogdemo.blob.core.windows.net/sql"
    BackupRestore = $True
}
Copy-DbaDatabase @copyParams

The only difference is that you now pass the URL of your blob storage account as SharedPath, rather than a UNC path.

If you are using Access Keys, then you will need to pass the name of the SQL credential holding them in via the AzureCredential parameter

$copyParams = @{
    Source = "Server1"
    Destination = "MyNewMI.public.cus29s972e4513d6.database.windows.net,3342"
    DestinationCredential = $cre
    SharedPath = "https://azblogdemo.blob.core.windows.net/sql"
    BackupRestore = $True
    AzureCredential = "azblogdemo"
}
Copy-DbaDatabase @copyParams

All the other parameters you’d use with Copy-DbaDatabase, other than ReuseSourceFolderStructure, are available. That one’s not used as Managed Instances don’t allow users to specify paths.

You may still get errors when copying certain databases. At the moment we’re not checking all the internal features of a database to make sure they are compatible with Azure SQL Managed Instances.

Start-DbaMigration with Azure Managed Instances

With Copy-DbaDatabase now supporting Managed Instances, we can also start making Start-DbaMigration support them as well; there’s a sketch of a basic call after the list below.

This is a preview of where we’re going, as this particular command has more moving pieces that we need to make sure are properly compliant with the differences between on premise instances and Managed Instances, as linked to above.

The sections working so far are:

  • Copy-DbaDatabase, as explained above
  • Copy-DbaLogin, which will copy SQL Server logins across, but not Windows logins as they are not supported in MI yet
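So a first pass at a Managed Instance migration ends up looking much like the Copy-DbaDatabase call above, just swapped over to Start-DbaMigration. Treat this as a sketch using the same placeholder names as before:

$migrationParams = @{
    Source = "Server1"
    Destination = "MyNewMI.public.cus29s972e4513d6.database.windows.net,3342"
    DestinationSqlCredential = $cre
    BackupRestore = $True
    SharedPath = "https://azblogdemo.blob.core.windows.net/sql"
}
Start-DbaMigration @migrationParams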

Bonus news for cross domain transfers

A common query on the #dbatools channel on SQL Community is how to copy a database or migrate an instance between 2 SQL Instances in different Active Directory domains with no trust relationship. The SQL Server connection is fairly simple as we can use SQL Server Logins, but the shared storage is trickier.

Permissions have to be very loose, and deciding how to sort the networking out can take a while

The good news is that using Azure Storage gets around all of those issues. Create yourself an account, create the credentials on the source and destination instances, and you’re good to go.

It’s all secure. The backups are encrypted on Azure Storage, and are only accessible to those people you allow (even Microsoft can’t see them unless you give them your keys). All the file transfer is done under 2048-bit SSL encryption. Which makes it safer than having a non-domain-joined server with a share opened up to everyone/everyone.

Wrap up

This is iteration one in a process that we hope will make migrating to a Managed Instance as easy as we’ve made migrating across SQL instances in your own Data Centres.

Please feed back any errors or issues you come across; we’d love to hear your feedback and ideas for improvements.

Feel free to leave them in the comments below, drop them into the #dbatools channel on SQL Community, or raise an issue on GitHub.

Fixing things when you push changes to the wrong branch on Github

If you’re in a panic there’s a TL;DR version at the bottom of the page

We’ve all done it. Working for ages tracking down that elusive bug in a project. Diligently committing away on our local repo as we make small changes. We’ve found the convoluted 50 lines of tortured logic, replaced it with 5 simple, easy to read lines of code, and all the tests have passed. So we push it back up to GitHub and wander off to grab a snack as a reward.

Halfway to the snacks you suddenly have a nagging doubt in the back of your mind: you don’t remember starting a new branch before starting on the bug hunt.

Snack forgotten, you hustle back to your desk, mistype your password twice as you try to log in, and there it is. You’ve pushed directly into the main branch!

Fear and embarrassment kick in. Hell, how am I going to fix this! And how much of an idiot am I going to look!

Been there, done that, got enough T-Shirts I really don’t need to buy clothes.

First off, don’t panic. You aren’t the first, and you won’t be the last. We’ll walk through some simple steps to recover from this modern faux pas (I don’t think Debrett’s covers this situation yet, but if they’d like to I’m happy to syndicate).

First off, jump on Slack/Teams/Email or whatever you use to communicate across the project to let everyone know what you’ve done. It’s much less embarrassing to do it now than when someone’s forked off of your unchecked code, or checked something in and merged with it.

Now you need to get your commits out of the main branch. As these have been committed there are 2 git options to look at: revert or reset. As this is a public SNAFU, revert is the correct way to do this.

A git revert creates a commit that undoes the commits you’re reverting so everything is logged in the history.

git reset will remove the evidence that it ever happened from the commit history, which makes it hard to pinpoint errors if you don’t get it perfectly right.
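For contrast, the reset route would look something like the below, and it rewrites the branch as if the commits never existed, which is exactly why it’s the wrong tool for a branch other people may already have pulled:

# Don't do this on a shared branch: it throws away the last 2 commits and force-pushes the rewritten history
git reset --hard HEAD~2
git push --force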

This doesn’t mean a git revert is without its issues or potential pitfalls. But, because you alerted everyone in step 1 (you did fess up, didn’t you?), you’ve got some breathing space, so take it slowly and don’t screw it up again.

First you need to work out how far back you need to revert. Your options are to revert a set number of commits or to revert a specific commit.

To revert the last x commits, you’d use this syntax:

git revert HEAD~x..HEAD

So to revert the last 2 commits you’d use:

git revert HEAD~2..HEAD

But how would you know how many commits to revert? That leads into the next bit: working out which commit you need to revert. There are 2 options here, GitHub or git log.

Getting Commit ID from git log

git log is the more complex way of doing this, but is much more flexible. And if you’re like me and spend most of your git time on the command line, it saves finding the mouse and a different window. We’ll look at 2 command line options that should be enough to sort out anything but the worst messes.

Running git log on its own produces a lot of verbose output:

git log showing verbose output

That’ll keep paging through as much history in that branch as has been kept. To exit just hit q

For some brevity we use the --oneline switch to just return the basic facts:

Using the oneline switch with git log

This is much easier to read, but still returns everything in the history. So let’s reduce that with the -x parameter, where x is an integer saying how many lines you want returned:

restricting the number of row git log returns

Hopefully, if you’ve caught the issue quickly and warned everyone else, you won’t have too many commits to scan through. For the revert you need enough of the Commit ID to be unique, which --oneline gives you.
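Putting those two together, a typical first check straight after a bad push looks something like:

git log --oneline -10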

Getting Commit ID from Github

As you’ve pushed the commits to github, github offers a nice way to see the commit history. Below is a snapshot of a repository I’ve just had to revert because I wasn’t paying attention.

To get the commit history, click on the highlighted number of commits.

Where to find git commit history on github

This pops up a history list of all the commits with their commit hash:

git history in github

There’s a handy clipboard button next to each commit to grab the Commit ID for the revert

Reverting the Commits

Now we need to revert the unwanted commits. There are 2 options. You can revert a specific number of commits from the current HEAD of the branch. For example, to revert the last 2 commits in the branch you’d use:

git revert HEAD~2..HEAD

or you can revert a specific commit by its ID:

git revert 93cd242

Personally, I always go with the commit hash. When you’re in a bit of a panic the first thing that goes for me is counting, so having an exact commit to revert saves a lot of double checking.

And you can revert commit by commit if you want. In the example I’ve been using here I wanted to see exactly what was being taken out of the files as I went along. So if you look at the git history from after I’d fixed things, you can see 2 reverts at the top of the history:

Git log history showing revert results

So now you’ve reverted your local repository, the next thing you need to do is push the changes back up to the GitHub repository. I know you know how to do this, as that’s what caused this issue in the first place!

Finally, let everyone know that they can use the main branch again, and make a note to check in future.

TL;DR

Let everyone know the branch is not right, to stop any pulling/pushing/branching from it.

Then get the commit history:

git log --oneline -20

Then revert the wrong commits, starting with the most recent:

git revert <CommitID>

Finally push everything back to the origin:

git push 

Let everyone know the main branch is safe to use again.

Incoming for 2019

It’s the start of a new year, so time to look at the things I’ve got coming up:

Personally

Just started booking things in for 2019, but already got a couple of things booked in that I’m really looking forward to:

SQLBits 2019

I’ve been going to SqlBits for years, and this year I’ve been lucky enough to be picked to speak. I’ll be presenting my session ‘Slack for the DBA’, about why and how you can use ChatOps techniques in your day to day database support to speed things up.

Not sure which day I’ll be speaking on, but I’ll be there for Friday and Saturday if you want to say hi. Hopefully there for the traditional 5k SQL Run on the Friday morning as well

SQL Saturday Iceland

As I’ll be passing over on the way to the next event I thought it’d be rude not to stop off for this as well. Plus an excuse to finally visit Iceland

MVP Global Summit

The yearly gathering of Microsoft MVPs. Really looking forward to this. Meeting other MVPs, getting to talk to the MS product teams, and getting some in depth information.

Others

Just starting to look at other events, and going to be booking in some user group speaking slots as well. So if you’re looking for a Data Platform or PowerShell speaker for your user group then please get in touch.

Nottingham SQL Server User Group

The Nottingham SQL Server User group is back for its bi-monthly meetings in 2019. The dates for 2019 are:

All the dates link to the individual meetup events where you can find more details on that evenings happenings. We’re picking up some quality speakers and topics for 2019. We welcome anyone who works with the MS Data Platform from beginner to expert, and we’re keen to hear from all speakers from first timers to those who’ve been around the international block 😉

Nottingham PowerShell Usergroup

And the Nottingham PowerShell Usergroup is back for 6 meetings in 2019 as well. This will be our second year hosting these events, and we’ve gradually built up a strong core audience, but we’d love to meet more PowerShell people. As normal we’ll be covering a wide range of topics that will appeal to all sorts of PowerShell users. The dates for 2019 are below, each being a link to the meetup for that date:

Again it’d be great to hear from you if you’d like to speak. Any PowerShell topic is more than welcome, and I’m more than happy to help you work on any ideas you have and get you started with your speaking career

PsConf.Eu – a quick review

This was my second trip to Hannover for PsConf.EU, a 4 day conference covering every corner of PowerShell.

Sessions

Let’s start with the most important part of a conference, the content. Just a quick scan of the Agenda gave a hint of the quality lined up. With sessions from the PowerShell Dev team, and deep dives from community maestros there was plenty of highly technical content, but mixed up with some easier intros to new topics.

For me some of the highlights were getting a first view of some of the new features coming from the DSC team for the LCM previews, straight from the horse’s mouth (well, from Michael Greene (https://social.msdn.microsoft.com/profile/Michael+Greene), the team leader); more details here. And the look at Pester internals from Jakub Jares (b | t) was good as well; I now feel I know a lot more about the underpinnings, and the reasons behind some of the ways you have to use it make much more sense.

No bad sessions from my point of view. A few issues with demos, though. Just from my point of view as a speaker: if you’re going to run a fragile multi-machine demo, record a good version ahead of time. While it’s always good to do a demo live, with those setups once it’s gone wrong it’s very hard to recover.

The days run long, 08:30 – 20:00 on the Wednesday with the 3 hour workshops at the end. For me this is a bonus. When I’m at a conference I like to concentrate on the learning. At some conferences I’ve been to it’s felt as though you spend almost as much time at lunch or coffee breaks as you do in the sessions. So while there were long days it felt worth it.

The venue is well lit and the AV is great; even from the back of the room the screens were clear and the PA carried the speaker right to the back. The rooms were closer together this year, and easier to navigate between. And despite the temperature being a step up from the UK, the rooms were kept at a pleasant but not overly cold temperature. Plenty of food and drink, and the lunch menus weren’t holding anything back.

Community

I’m lucky enough to know quite a few PowerShell community members already, but I met plenty more at PSConf.EU. Everyone’s there for the same reason, so it was always easy to spark up a conversation with someone, whether sat next to them waiting for a session or in the queue for lunch. It was good to see the number of PS user groups that were teased out over a lunchtime breakout session; hopefully the list will be up on the web shortly.

Party

Tuesday night was party night this year. Back to Hannover Zoo for an evening of catching up and making new friends. The boat was pushed out with the venue, the demonstration from the Zoo (this year, polar bear feeding), and the free food and beer. I don’t think anyone went hungry or thirsty that night.

Logistics

From the UK this is a pretty easy conference to get to. From Nottingham I took the train down to Heathrow, a 1 hour hop over with British Airways and a 20 minute train journey saw me in the centre of Hannover. Taxis are about €12 from the station to the conference hotel, but as I’d spent a lot of time sitting down I decided to take the 25 minute walk across.

Accommodation

This year I stayed in the conference hotel, the Congress Hotel Am StadtPark, next door to the main conference venue. Good hotel, the room was a decent size and had all the usual features. The bar was reasonably priced and usually full of PowerShell geeks if you wanted a chat over beers. The restaurant was good, but not cheap. The only downside was flaky WiFi, but that improved on Saturday once all the phones, tablets and laptops from the conference had left, for some reason…

Another great year from Tobias and his team. They’ve announced the dates for next year, 4th-7th June 2019, and they’ve been written into the calendar in ink as I’ll definitely be heading back.

