This month Mala has asked us to post about our Learning Goals for 2018. Now, there’s nothing like nailing your colours to the mast so you can come back in a year and see if you actually did something (though let’s not talk about my 2017 exercise plans!), so here goes:
Tech Learning Goals
Networking
I read TCP/IP Illustrated years ago, so apart from IPv6 I’m pretty solid on the fundamentals, but the number of ways networks can be manipulated and managed has ballooned. And no matter where your data and applications are going to live in the future, it’s still going to be essential to know how they can be connected. So I’m definitely pencilling in some time with our Network Infrastructure team to get an overview of what’s out there, and what they see as the future. I know I can bingle that sort of stuff, but it’s even better to get some professional advice.
Then I’ll be hitting Pluralsight and other video training, and looking forward to building some very different labs to the ones I’m used to. For once the servers and the data won’t be the focus.
The aim is to sit in a meeting with our network team and keep up with everything! Though I might draw the line at memorising the Cisco model numbers like they seem to have.
More Automation
I’ve increasingly automated a lot of my repetitive work tasks. But I need to automate a lot of the routine work that’s come in and filled up my ‘spare’ time. Now, I’m as guilty as the next IT Pro of sticking to the easy fix for automation and sticking to technologies I know. Well, for 2018 I’m going to start branching out and use the automation to force me to learn some new technologies. So rather than sticking to PowerShell I’ll be looking at Python and Go in more depth, looking at more serverless Cloud technologies for data transformation and endpoints rather than spinning up more VMs or containers, and trying to let other Cloud services take the strain rather than me.
If I get this right and make progress, then the other things on this list will come to pass. Freeing up time to work on the good stuff will be the biggest pay off.
New Personal skill
Improve my Writing
I’m not a natural writer by any stretch of the imagination; my natural inclination is towards numbers, equations and strange pencil diagrams that only I can decode. This tends to limit me when writing documentation, and blog posts take a disproportionate amount of time to write.
So I’m going to head to the Nottingham library to get some books on writing. And to make sure I put it into practice, I’m going to be keeping a journal from January 1st (don’t worry, it won’t be online!). Hopefully forcing myself to write every day without an audience will help me with writing for an audience. So I’ve 2 weeks to find a nice pen and notebook to try and inspire my inner Samuel Pepys!
New non-tech skill
Get better at marketing myself
In the classic British manner I’m happy to hide my lamp under a bushel. So for next year I’m going to try to put myself out there. I’m hoping this will be helped along by improving my writing as above. I’m going to make more use of asking other tech friends to review session submissions and give useful feedback, and I’ll be asking for feedback from organisers on my submissions, including when I succeed, so I can build on the good things as well as fixing the bad.
And I’m hoping that the increased blog posting from my better writing skills means I’ll have more content to share here, and that the new tech skills will give me more interesting things to write about.
The metrics for this one will be the number of events I present at in 2018 and the number of visitors to my blog.
Conclusions
To me that looks like a good solid set of skills to work on. They complement each other, so an improvement in one should help with another one. They can also be worked on at different rates, which stops things getting stressful if something has to be put aside due to other pressures.
I’ve been playing around with Ansible again for an incoming project. It involves Windows, DSC and SQL Server as well, so I needed WinRM available to do anything. Since I last played with Ansible installing pywinrm seems to have got a lot trickier. I followed a number of online guides, and kept banging up against this error:
Error
Downloading/unpacking cryptography>=1.3 (from requests-ntlm>=0.3.0->pywinrm)
Downloading cryptography-2.1.4.tar.gz (441kB): 441kB downloaded
Running setup.py (path:/tmp/pip_build_root/cryptography/setup.py) egg_info for package cryptography
error in cryptography setup command: Invalid environment marker: python_version < '3'
Complete output from command python setup.py egg_info:
error in cryptography setup command: Invalid environment marker: python_version < '3'
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/cryptography
Storing debug log for failure in /root/.pip/pip.log
This is on the Ubuntu/trusty64 image I’m using to build my control node with Packer.
After a couple of hours of bingling around I tracked it down to an issue with python’s setuptools package, which then meant an upgrade was needed to pip itself. The set of steps needed in the end is:
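Roughly, it boils down to upgrading setuptools, then pip itself, and then installing pywinrm; something along these lines (a sketch, so exact commands and versions on your image may differ):

# upgrade setuptools first, then pip itself, then install pywinrm
sudo pip install --upgrade setuptools
sudo pip install --upgrade pip
sudo pip install pywinrm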
And this is now giving me a solid install each time, apart from a couple of deprecation warnings that aren’t going to affect my needs for a while.
Hopefully this will save someone else a day running around various github and support sites.
Interested in any form of PowerShell usage and based around the East Midlands and Nottingham? Then this could be the group for you. We’re looking to cover anything that uses (or can be used with) PowerShell. So topics that are fair game include:
AD Management
Scripting
Source Control
DevOps
Azure/AWS/Cloud Provider of your choice
Exchange management
SQL Server
Pester testing
Continuous Integration
.Net Internals
Generally anything that would interest someone using PowerShell or give their career a boost!
Nothing is set in stone yet as we want to get some feedback from potential members. There’s a date booked for the 8th February (Kick Off meeting), but what happens is up to you.
Would you prefer a traditional usergroup with booked speakers giving presentations in a formal setting, or something more informal like roundtable/whiteboard sessions? Or perhaps half and half?
We’d love to know what you’d like to see or learn. So please either drop a comment below or sign up for the Kick Off Meeting and give us feedback on Meetup.
No matter how hard the dbatools team try, there’s always someone who wants to do things we’d never thought of. This is one of the great things about getting feedback direct from a great community. Unfortunately a lot of these ideas are either too niche to implement, or would mean a lot of complex code for a single use case.
As part of the Restore-DbaDatabase stack rewrite, I wanted to make it easier for users to get their hands dirty within the Restore stack. Not necessarily by diving into the core code and the world of Github Pull Requests, but by manipulating the data flowing through the pipeline using standard PowerShell techniques, all the while letting our code do the heavy lifting.
So, below the fold we’ll be looking at some examples of how you can start going to town with your restores.
Just a quick post for something I had to deal with for another post I’m writing. I wanted to do some work where I’d reuse a base array over a number of passes in a loop, and didn’t want any changes made in one pass affecting later iterations. If you’ve tried copying an array containing arrays or other objects in PowerShell in the usual manner, then you’ll have come across this problem:
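Here’s a minimal illustration of the problem (the variable names are just for the example):

$original = @(
    [PSCustomObject]@{ Name = 'db1'; Files = @('db1.mdf', 'db1.ldf') }
)

# This looks like a copy, but both variables point at the same underlying objects
$copy = $original
$copy[0].Name = 'changed'
$original[0].Name   # also 'changed' - not what we wanted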
As is common in .Net, the copy is ‘Copy by Reference’ so you don’t get a nice shiny new independent array to play with. All that’s copied is the references to the original’s place in memory. Therefore any changes to either object affects both, as both variable names are looking at the same piece of memory. This is nice and efficient in terms of storage and speed of copying, but not great for my purposes.
There are various workarounds kicking around if you’re using simple arrays, but they tend to break down when you’ve got arrays that contain arrays or other PowerShell objects. My method for copying them is a little down and dirty, but it works 95% of the time for what I want. The trick is to Serialize the object, and then DeSerialize it into the new one:
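A minimal sketch of the technique using PowerShell’s built-in PSSerializer class (same illustrative objects as above):

# Rebuild the original array, then deep copy it by round-tripping through CliXml
$original = @(
    [PSCustomObject]@{ Name = 'db1'; Files = @('db1.mdf', 'db1.ldf') }
)
$copy = [System.Management.Automation.PSSerializer]::Deserialize([System.Management.Automation.PSSerializer]::Serialize($original))

$copy[0].Name = 'changed'
$original[0].Name   # still 'db1' - this time the copy really is independent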
And, voila we have the outcome I wanted. Just to make the line of code easier to read, here it is:
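Something along these lines, just split over two statements:

$serialized = [System.Management.Automation.PSSerializer]::Serialize($original)
$copy       = [System.Management.Automation.PSSerializer]::Deserialize($serialized)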
Now, I mentioned up front that this works ~95% of the time. The times it doesn’t work for me are when the underlying object type doesn’t serialize nicely. The most common one I come across is BigInt. This ‘deserializes’ back in as a non-integer type and then won’t play nicely when compared to other ‘real’ BigInt values, so make sure to check you have the values you think you should have.
A new version of the dbatools Restore-DbaDatabase command was released into the wild this week. One of the main aims of this release was to make it easier to debug failures in the restore process, and to drag information out of the pipeline easily (and anonymously) so we can increase our Pestering of the module with Unit and Integration tests.
So I’d like to share some of the features I’ve put in so you can take part.
The biggest change is that Restore-DbaDatabase is now a wrapper around 5 public functions. The 5 functions are:
Get-DbaBackupInformation
Select-DbaBackupInformation
Format-DbaBackupInformation
Test-DbaBackupInformation
Invoke-DbaAdvancedRestore
These can be used individually for advanced restore scenarios; I’ll go through some examples in a later post.
For now it’s enough to know that Restore-DbaDatabase is a wrapper around this pipeline:
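Roughly speaking, the pipeline being wrapped looks like this (mandatory parameters omitted for clarity):

# Conceptually - each stage pipes its BackupHistory output into the next
Get-DbaBackupInformation |
    Select-DbaBackupInformation |
        Format-DbaBackupInformation |
            Test-DbaBackupInformation |
                Invoke-DbaAdvancedRestore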
and its other job is passing parameters into these sub-functions as needed.
With previous versions of Restore-DbaDatabase you were restricted to throwing data into one end and seeing what came out of the other, with some insight provided by Verbose messages. Now things can be stepped through, and data extracted as needed, in a format that plugs straight into our testing functions.
Get-DbaBackupInformation
This is the function that gets all of the information about backup files. It scans the given paths, and uses Read-DbaBackupHeader to extract the information from them. This is stored in a dbatools BackupHistory object (this is the same as the output from Get-DbaBackupHistory, so we are standardising on a format for Backup information to be passed between functions).
So this would be a good place to check that you’ve got the files you think you should have, and it’s also the first place we’d be looking if you had a report of a break in the LSN chain.
To get the output from the pipeline at this point we use the GetBackupInformation parameter:
Restore-DbaDatabase -GetBackupInformation gbi
This will create a globally scoped variable $gbi containing the output from Get-DbaBackupInformation. Note that when passing the name to Restore-DbaDatabase you do not need to specify the $.
If you want to stop execution at this point, then use the -StopAfterGetBackupInformation switch. This will stop Restore-DbaDatabase from going any further.
This is also a good way of saving time on future runs, as the BackupHistory object can be passed straight in, saving the overhead of reading all the file headers again:
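For example (instance names and paths are placeholders, so check Get-Help Restore-DbaDatabase for the exact switches in your version):

# First run - scan the backup files, keep the BackupHistory object and stop there
Restore-DbaDatabase -SqlInstance server1\instance -Path \\server\backups -GetBackupInformation gbi -StopAfterGetBackupInformation

# Later runs - pipe the saved history back in rather than re-reading every header
$gbi | Restore-DbaDatabase -SqlInstance server1\instance -TrustDbBackupHistory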
Select-DbaBackupInformation
Here we filter down the output from Get-DbaBackupInformation to restore to the point in time requested, or the latest point we can. This means we find:
– the last full backup before the point in time
– the latest differential between the full backup and the point in time
– and then all transaction log backups to get us to the requested time
This is done for every database found in the BackupHistory object
Here is where we’d begin looking for issues if you had a ‘good’ LSN chain from Get-DbaBackupInformation and then it broke.
To get this data you use the SelectBackupInformation parameter, passing in the name of the variable you want to store the data in (without the $ as per GetBackupInformation above)
There is also a corresponding StopAfterSelectBackupInformation switch to halt processing at this point. We stop processing at the first StopAfter* point reached in the pipeline, so specifying multiple StopAfter* switches won’t have any extra effect.
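As with the previous stage, something along these lines will capture the selected chain (server, path and date are placeholders):

Restore-DbaDatabase -SqlInstance server1\instance -Path \\server\backups -RestoreTime (Get-Date '2017-06-08 09:47:00') -SelectBackupInformation sbi -StopAfterSelectBackupInformation
# $sbi now holds just the backups that make up the restore chain for each database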
Format-DbaBackupInformation
This function performs the transforms on the BackupHistory object per the parameters pushed in. This includes renaming databases, and file moves and renames. For everything we touch we add an extra property prefixed with Original to the BackupHistory object. For example, the original name of the database will be in OriginalDatabase, and the target name will be in Database.
So this is a good spot to test why transforms aren’t working as expected.
To get data out at this pipeline stage use the FormatBackupInformation parameter with a variable name. And as normal it has an accompanying StopAfterFormatBackupInformation switch to halt things there.
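For example (names are illustrative again), to check what a database rename has done to the history:

Restore-DbaDatabase -SqlInstance server1\instance -Path \\server\backups -DatabaseName NewDbName -FormatBackupInformation fbi -StopAfterFormatBackupInformation
# Compare the incoming and outgoing names side by side
$fbi | Select-Object OriginalDatabase, Database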
Test-DbaBackupInformation
Before passing the BackupHistory object off to be restored we do some checks to make sure everything is OK. The following checks are made:
LSN chain complete
Does a destination file already exist? If it’s owned by a different database then fail
Does a destination file already exist? If it’s owned by the database being restored, is WithReplace specified
Can SQL Server see and write to all the destination folders
Can SQL Server create any destination folders missing
Can SQL Server see all the backup files
If a database passes all these checks then its backup history is marked as restorable by the IsVerified property being set to $True.
To get the data stream out at this point use the TestBackupInformation parameter.
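So a quick way to see which databases didn’t make the cut (placeholders again, and I’m assuming a StopAfterTestBackupInformation switch to match the other stages):

Restore-DbaDatabase -SqlInstance server1\instance -Path \\server\backups -TestBackupInformation tbi -StopAfterTestBackupInformation
# Anything that failed a check will still have IsVerified set to $False
$tbi | Where-Object { $_.IsVerified -eq $False } | Select-Object Database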
General Errors with restores
Once we’re past these stages, then our error reporting is at the mercy of the SMO Restore class. This doesn’t always provide an obvious cause straight away. Usually the main error can be found with:
$error[1] | Select-Object *
We like to think we capture most restore failure scenarios nicely, but if you find something we don’t then please let us know, either on Slack or by raising a Github issue.
As usual, the dbatools terminating error will be in $error[0].
Providing the information for testing or debugging.
If you’re running in to problems then the dbatools team may ask you to provide the output from one of these stages so we can debug it, or incorporate the information into our tests.
Of course you won’t want to share confidential information with us, so we would recommend anonymising your data. My normal way of doing this is to use these 2 stubbing functions:
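The core idea behind them is simple enough to sketch: deterministically hash each sensitive value, so the same input always maps to the same replacement. The function and property handling below are my own simplified illustration (the real BackupHistory properties may be typed objects rather than plain strings, and the path and file list properties need a little more care):

function Get-AnonymisedValue {
    # Deterministic anonymiser - the same input always returns the same token
    [CmdletBinding()]
    param([string]$Value)

    if ([string]::IsNullOrEmpty($Value)) { return $Value }
    $md5   = [System.Security.Cryptography.MD5]::Create()
    $bytes = [System.Text.Encoding]::UTF8.GetBytes($Value)
    $hash  = [System.BitConverter]::ToString($md5.ComputeHash($bytes)) -replace '-', ''
    return $hash.Substring(0, 12)
}

# Swap out the simple string properties on each BackupHistory entry
$SensitiveProperties = 'ComputerName', 'InstanceName', 'SqlInstance', 'Database', 'UserName'
foreach ($item in $gbi) {
    foreach ($prop in $SensitiveProperties) {
        if ($null -ne $item.$prop) {
            $item.$prop = Get-AnonymisedValue -Value "$($item.$prop)"
        }
    }
}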
This method will anonymise the values in ComputerName, InstanceName, SqlInstance, Database, UserName, Path, FullName, FileList, OriginalDatabase, OriginalFileList, OriginalFullName and ReplaceDatabaseName, but it will produce the same output for the same input, so we can work with multiple database sets at once.
I hope that’s been of some help. As always if you’ve a question then drop a comment below, ping me on twitter (@napalmgram) or raise an issue with dbatools on Slack or Github
Once again I’m very happy to announce that the travelling SQL Server conference SQL Relay will be visiting Nottingham again on Tuesday 10th October 2017. We’ll once again be offering a whole day of free training around the Microsoft Data Platform, covering topics like
SQL Server
Azure
Machine learning
Data Science
Azure Data Lake
PowerBI
and much more. Our last visits to Nottingham have gone down very well with great feedback from attendees.
For more information on sessions, and to register for this free event (we even provide lunch) please head here – Nottingham SQL Relay registration
And if you’re based in Nottingham and interested in SQL Server, then you may want to check out the Nottingham SQL Server user group as well, we’d love to welcome you along to future meetings – meetup.com/passeastmidlands/
The UK has a great range of techie events these days, but it’s always good to see another one starting up. PSDay.Uk is the first in a series of international one-day PowerShell events.
The idea is to provide a local set of events that complement the 3 big international PowerShell conferences (PSConf.eu, PSConf.Asia and the PowerShell Global Summit). This makes it easier for local attendees to get to events (no airfare, possibly no hotels, much easier to get agreement from employers) and gives an opportunity for local speakers, but it also provides a framework to let the organisers tap into the pool of international speakers.
My good friend Rob has written more on the idea, and on the formation of the organising body of which he’s a member, over on his blog here – What’s a PSDay
The initial UK event is happening on Friday 22nd September 2017 at CodeSkills in London. Full details and tickets are available on the website – https://psday.uk/ and more information is available on Twitter (@PSDayUK) and Facebook
I’m completely chuffed at being picked to speak at this event as well. I’ll be presenting a new session titled “DevOps by the back door, via the Helpdesk”, where I’ll be sharing examples and ideas of how you can start your DevOps journey with small bite sized actions which will start to show your employer (and colleagues) that this stuff really does work.
Hope to see lots of people there. This sounds like a great project, so make sure you can say you were at the first one in years to come!
You’re writing a series of PowerShell functions that are to be used in a pipeline (pseudo code):
$Output = Get-Data | Where-Data
and you want to grab the output of each command but not interfere with the pipeline throughput? Say you want to see what Get-Data is getting and passing into Where-Data
function Get-Data {
    # Nothing fancy - just a wrapper around Get-ChildItem to give us some data to filter
    [CmdletBinding()]
    param(
        [string]$Path
    )
    Process {
        Get-ChildItem -Path $Path
    }
}
So nothing too fancy, really just a wrapper around Get-ChildItem, but it gets me a decent data set that I can then filter. Now if this function was also doing some filtering of the data and we weren’t seeing the output we wanted in $Output, then it would be nice to grab the data before it’s passed on. To do this we’ll take a variable name as a parameter and then assign it within the function using Set-Variable:
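A sketch of what that can look like (the ReturnVariable parameter name is just my placeholder):

function Get-Data {
    [CmdletBinding()]
    param(
        [string]$Path,
        [string]$ReturnVariable
    )
    Process {
        $Data = Get-ChildItem -Path $Path
        if ('' -ne $ReturnVariable) {
            # Global scope so the variable is visible outside the function or module
            Set-Variable -Name $ReturnVariable -Value $Data -Scope Global
        }
        # Pass the data on down the pipeline as before
        $Data
    }
}

# Usage - $GetDataOutput ends up holding whatever Get-Data passed on, without disturbing $Output
$Output = Get-Data -Path C:\Temp -ReturnVariable GetDataOutput | Where-Object { $_.Length -gt 1kb }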
Things to note in the function:
we use Set-Variable as this copes better if the variable already exists. Be careful of this as you will clobber the existing variable
Wrap Set-Variable in an if block, or it’ll throw when passed an empty variable name (if you wanted to be really solid, you’d want to put some validation around the parameter to make sure you only take in legal variable names).
We set the scope of the variable to Global to make sure we can see it outside of our command or module.
If you have a large function with lots of points where you’d like the option to return data from various points for future debugging, then you can always have multiple return variable parameters and populate them as you need them.
As a prelude to a post describing some new features of the dbatools Restore-DbaDatabase function, I thought I’d just go through how I create a time series database to give me something to test restoring to a point in time with minimal overhead.
This sort of database can also be really handy for your own training if you want to play with point in time restores, or explore what can be done with standby restores.
Premise
I want to end up with:
A database – basic unit of restore
Single table – Let’s keep this small and simple
Table contains a series of rows written at a set time interval
So the table should end up like this:
StepID    DateStamp
3         08/06/2017 09:46:45
4         08/06/2017 09:47:15
We should end up with a nice spread of times so we can easily say ‘Restore to this point in time’ and then be able to check we’ve actually gotten there
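A rough sketch of how that can be put together from PowerShell (instance, database and table names are mine, I’m using Invoke-Sqlcmd from the SqlServer module, and the interval and row count are just examples):

$Instance = 'localhost'
Invoke-Sqlcmd -ServerInstance $Instance -Query "CREATE DATABASE RestoreTimeSeries"
Invoke-Sqlcmd -ServerInstance $Instance -Database RestoreTimeSeries -Query "CREATE TABLE dbo.Steps (StepID int IDENTITY(1,1) PRIMARY KEY, DateStamp datetime2 NOT NULL DEFAULT sysdatetime())"

# Drop a row in every 30 seconds so each StepID lands at a known, distinct time
1..20 | ForEach-Object {
    Invoke-Sqlcmd -ServerInstance $Instance -Database RestoreTimeSeries -Query "INSERT INTO dbo.Steps DEFAULT VALUES"
    Start-Sleep -Seconds 30
}

Take log backups while it runs and each of those timestamps becomes a handy point in time target.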