Thursday, November 20, 2014

$Using: scope in DSC script resources

If you have spent any time working with DSC resources, you have found yourself needing to use the Script resource for some things.

Script Script_Resource
{
    GetScript = {}
    SetScript = {}
    TestScript = {}
    DependsOn = ""
    Credential = ""
}

It is easy enough to use. Define your Get, Set, and Test PowerShell commands and you are all set. Everything is good until you need to pass variables into your script blocks. You will quickly find that it does not work the way you would expect.

The TestScript in this configuration will always return false:

$path = "c:\windows\temp"
Script Script_Resource
{
    TestScript = {
        Test-Path $path
    }
}

Because $path has no value within the scope of the TestScript, Test-Path returns false. Take a look at the mof file and you can see this.

instance of MSFT_ScriptResource as $MSFT_ScriptResource1ref
{
 ResourceID = "[Script]Script_Resource";
 TestScript = "\n Test-Path '$path'\n ";
 SourceInfo = "::7::9::Script";
 ModuleName = "PSDesiredStateConfiguration";
 ModuleVersion = "1.0";

};


I have found two ways to deal with this issue. If you think about it, the TestScript is just a string that gets run on the target node. If you look at the resource definition, TestScript is defined as a String.

$path = "c:\windows\temp"
Script Script_Resource
{
    TestScript ="Test-Path '$path'"
}


This works really well when the command is a very simple one-line script. Take a look at the mof file now.

instance of MSFT_ScriptResource as $MSFT_ScriptResource1ref
{
 ResourceID = "[Script]Script_Resource";
 TestScript = "Test-Path 'c:\\windows\\temp'";
 SourceInfo = "::6::9::Script";
 ModuleName = "PSDesiredStateConfiguration";
 ModuleVersion = "1.0";

}; 

This could end up very messy if the script block gets more complicated. What if you have variables that you want to define in the script while also using some from the parent scope? You end up escaping things with that horrible backtick.
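Something like this hypothetical sketch shows how messy it gets. $path expands from the parent scope when the configuration compiles, while `$file needs a backtick so it survives as a variable in the script that runs on the target node:

$path = "c:\windows\temp"
Script Script_Resource
{
    TestScript = "
        `$file = Join-Path '$path' 'desktop.ini'
        Test-Path `$file
    "
}

But there is a better way.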

This is where the $using: scope comes to the rescue. As far as I can tell, this is undocumented for use in script resources, but it is the same syntax that lets Invoke-Command script blocks reference variables from the parent scope. It works for our script resource too.

$path = "c:\windows\temp"
Script Script_Resource
{
    TestScript = {
        Test-Path $using:path
    }
}

Now when we dive into the mof file, we can see just how this magic works. Our $path variable gets defined at the top of the script with its value inlined.

instance of MSFT_ScriptResource as $MSFT_ScriptResource1ref
{
 ResourceID = "[Script]Script_Resource";
 TestScript = "$path='c:\\windows\\temp'\n Test-Path $path\n ";
 SourceInfo = "::7::9::Script";
 ModuleName = "PSDesiredStateConfiguration";
 ModuleVersion = "1.0";
};

The $using: scope is something I often overlook, but this is a very handy use for it.

One final note about my examples: I trimmed them down to minimize the code. If you want to recreate my tests, you will need to define the SetScript and GetScript properties for each Script resource.
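For reference, a filled-out version of the $using: example might look something like this. It is a minimal sketch; the New-Item call is just a placeholder Set action, and GetScript returns the hashtable the Script resource expects:

$path = "c:\windows\temp"
Script Script_Resource
{
    # GetScript returns a hashtable with a Result key
    GetScript  = { @{ Result = (Test-Path $using:path) } }
    TestScript = { Test-Path $using:path }
    # Placeholder Set action: create the folder if it is missing
    SetScript  = { New-Item -Path $using:path -ItemType Directory -Force }
}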

Monday, November 03, 2014

Using Pester to validate DSC resources and configurations

The more I use Pester, the more I like it. I found some ways to leverage it in validating my use of DSC Resources and configurations. Here are some samples to give you an idea of what I am talking about.

Are my resources loaded?
Often I will create a new resource thinking it will work, only to find it has not loaded for some reason. At the moment all my resources are in the same module, so I have this test that uses my folder structure to check for each loaded resource.

$LoadedResources = Get-DscResource

Describe "DSCResources located in $PSScriptRoot\DSCResources" {
    $ResourceList = Get-ChildItem "$PSScriptRoot\DSCResources"
   
    Foreach($Resource in $ResourceList){
        It "$Resource Is Loaded [Dynamic]" {
            $LoadedResources |
                Where-Object{$_.name -eq $Resource} |
                Should Not BeNullOrEmpty
        }
    }
}


Can my resource generate a mof file?
The next thing I want to know is if it will generate a mof file. I create a test like this for every DSC resource. I use the TestDrive: location to manage temporary files.

Describe "Firewall"{
    It "Creates a mof file"{
        configuration DSCTest{
            Import-DscResource -modulename MyModule  
            Node Localhost{
               Firewall SampleConfig{
                    State = "ON"
               }
            }
        }
        DSCTest -OutputPath Testdrive:\dsc
        "TestDrive:\dsc\localhost.mof" | Should Exist
    }
}

Do my Test and Get functions work?
For some of my resources, I even test the Get and Test functions inside the module. I first copy the module file to a .ps1 script so I can dot-source it. Also notice I use a Mock to keep Export-ModuleMember from throwing errors.


$here = Split-Path -Parent $MyInvocation.MyCommand.Path

Describe "Firewall"{
    Copy-Item "$here\Firewall.psm1" TestDrive:\script.ps1
    Mock Export-ModuleMember {return $true}

    . "TestDrive:\script.ps1"
    It "Test-TargetResource returns true or false" {
        Test-TargetResource -state "ON" |
            Should Not BeNullOrEmpty
    }

    It "Get-TargetResource returns State = on or off" {
        (Get-TargetResource -state "ON").state |
            Should Match "on|off"
    }
}


Reading the Windows PowerShell Language Specification Version 3.0

Anytime I spend a good deal of time with a technology, I eventually track down the core documentation: the real technical stuff that shows you everything. I just rediscovered the Windows PowerShell Language Specification Version 3.0. It would be nice to have one for PowerShell 4.0, but I have not seen it yet. I rediscovered a few things that I either missed before or was not ready to understand yet. Here are some hidden gems that I would like to share with you.

Wildcard attributes
We use these all the time for partial matches. The cool thing is that we can also use them in file paths to skip over folders of any name. Let me show you how we can use this to fix an old problem of mine.

Our home folders are also the users' My Documents folders. Every time a user logs in, they create a desktop.ini file in that location. It does this cool trick of renaming the folder to say "My Documents". This is nice, unless you are an admin looking at a list of 500 folders called "My Documents". Here is a quick fix to delete all of those files.

Get-ChildItem d:\profile\*\desktop.ini | Remove-Item -Force

ForEach-Object -MemberName
I can't say that I have many uses for this one, but I think it is interesting. ForEach-Object has a -MemberName parameter. If you provide it the name of a method, it will call that method on each object. Here is one good example of using it to uninstall software.

Get-WMIObject Win32_Product | Where-Object name -match Java | ForEach-Object -MemberName Uninstall

Or we can shorten this same command even further:

gwmi win32_product | ? name -match Java | % uninstall

$PSBoundParameters
This variable contains a dictionary of all the parameters that were passed to the function or script. The cool thing is that you can splat these to another function. This is great when you are creating a wrapper around a function that takes lots of parameters. I don't have a good example off hand, but I know I have code out there that would be much cleaner with this.
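A hypothetical sketch of the pattern (Restart-LoggedService is made up for the example; Restart-Service is the real cmdlet being wrapped):

# Every parameter the caller actually supplied gets forwarded to the
# real cmdlet by splatting $PSBoundParameters
function Restart-LoggedService
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory)]
        [string]$Name,

        [switch]$Force
    )

    Write-Verbose "Restarting service $Name"
    Restart-Service @PSBoundParameters
}

Restart-LoggedService -Name Spooler -Force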


Thursday, July 24, 2014

Loading .Net Assemblies into PowerShell

I have some functionality written in C# that I want to use in PowerShell. I compiled the code I wanted into a dll and copied it to my target system. Then I used this snippet of code to load and use it:

[System.Reflection.Assembly]::LoadFrom(".\myProject.dll")
$myObject = New-Object myNamespace.myClass
$myObject.MyFunction()

If my functions return objects, I can use them just like I would any other object in PowerShell. This is just another way to tap into the .Net Framework.
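Add-Type can do the same load, and I find it a little friendlier:

# Equivalent load using Add-Type instead of the reflection call
Add-Type -Path ".\myProject.dll"
$myObject = New-Object myNamespace.myClass
$myObject.MyFunction()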

Sunday, July 13, 2014

The compounding power of automation

I was recently reviewing some of my past automation and development projects. I took the time to calculate the man hours my projects saved the organization. Over the last 9 years it has added up to some substantial savings. I have directly saved 44,000+ man hours. Because those tasks were automated, the frequency of that work was increased. I estimate that over a 5 year window, my projects are doing the work of 215,000+ man hours.

I want to take a moment to point out the xkcd.com chart on whether automation is worth the time. I used the 5 year metric because of that chart.


I think every system administrator automates things all the time without thinking about it. I included several of those in my calculation.

Every week, we would make a copy of the production data onto a second server for reports. It would take me about an hour to create a one-off backup, restore it to the second server, and run some post-processing scripts. If I had spent an hour each week over the last 9 years, it would have taken 468 hours of my time. No admin in their right mind does something like this by hand. I automated it and did something more productive with those 468 hours.

The advantage of automating it was that we could run it more often and give the business better access to the data. I made it a daily process, automating what would have been 2,340 man hours of the same work.

I have one project where I saved 4 seconds (an 80% improvement) on each of 1.1 million actions. One automation script took my department out of the account provisioning process, saving 270 hours over 3 years. Another took a set of reports that someone spent a week generating 4 times a year and made the whole set process daily. There are 20+ projects where I saved the company time and made it more productive.

These savings are not imaginary. There are a few cases where staff resources were reassigned to other areas because of this automation. Part of the reason I got involved in many of these projects is that they took too much time and there had to be a better way. I am good at finding that better way.

Tuesday, July 01, 2014

Why do I have to wait for my computer to turn on? Can't it figure out when I need to use it?

One thing we have a lot of in IT is logs. We log everything and then some. I often look at this large collection of data and wish I could do more with it. One set of our logs records every time my users log on or off of a computer. I think I found a clever way to use that information.

We also have a lot of old computers. Sometimes they take longer than we want to start up. I love how fast Windows 8.1 handles things, but we don't have it installed everywhere. I know a lot of our users turn the computer on first thing in the morning and go do other things while it starts. In some areas, the first person in starts everyone else's computer. Other people just never turn theirs off.

What if our computers knew that you start using your computer at 8:00 every day? Why can't it just turn on at 7:45? Why can't we figure this out for every computer and just take care of it for our users? We can, and here is how I did it.

I parse 6-8 weeks' worth of logs that record every time someone gets logged on. I have user, computer, and time. For this, I don't care who the user is. I parse the log to figure out each computer's usage pattern. I assume a weekly cycle, which makes my results more accurate.
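The analysis itself is not complicated. Here is a rough sketch, assuming $LogonEvents already holds objects with Computer and Time properties parsed from the logs (both names are placeholders):

# For each computer and day of week, find the earliest logon time
# and schedule the wake-up 15 minutes before it
$schedule = $LogonEvents |
    Group-Object Computer, { $_.Time.DayOfWeek } |
    ForEach-Object {
        $earliest = $_.Group | Sort-Object { $_.Time.TimeOfDay } | Select-Object -First 1
        [pscustomobject]@{
            Computer  = $earliest.Computer
            DayOfWeek = $earliest.Time.DayOfWeek
            WakeTime  = $earliest.Time.TimeOfDay - [timespan]::FromMinutes(15)
        }
    }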

Then I have a script that runs all the time, checking that list and waking up computers that will be used in the next 15 minutes. All of our computers are already configured for Wake-on-LAN, so that is how we start them.
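The wake-up side boils down to sending a magic packet. Here is a minimal sketch, assuming you can already look up each computer's MAC address (the function name and MAC format are placeholders):

# A magic packet is six 0xFF bytes followed by the target MAC repeated
# 16 times, broadcast over UDP (port 9 by convention)
function Send-WakeOnLan
{
    param
    (
        [Parameter(Mandatory)]
        [string]$MacAddress   # e.g. "00-11-22-33-44-55"
    )

    $macBytes = $MacAddress -split '[:-]' | ForEach-Object { [byte]"0x$_" }
    $packet   = [byte[]]((,0xFF * 6) + ($macBytes * 16))

    $udp = New-Object System.Net.Sockets.UdpClient
    $udp.EnableBroadcast = $true
    $udp.Connect([System.Net.IPAddress]::Broadcast, 9)
    [void]$udp.Send($packet, $packet.Length)
    $udp.Close()
}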

I don't know if you have worked with Wake-on-LAN before, but it has its own nuances that I will save for another post.

Monday, June 30, 2014

rundll32 printui.dll,PrintUIEntry to map network printers

Have you ever used rundll32 printui.dll,PrintUIEntry to map a network printer on a computer? This command can map a printer for every user and can also be used to remove that same mapping. When you think about it, this behavior is a little odd.

I say that because mapping printers is a user profile based action. If you want a printer to show up for everyone, you have to install it on the computer directly. But somehow this command gets around that.

I was writing a DSC resource that used this command and I ran into an issue trying to verify a printer was mapped. CIM_Printer and Win32_Printer could not find the printer when run in the context of DSC. I suspect that the system account can't see the printer. Once a user is logged in, the connection to that printer is established. I had to find another way to identify these mapped printers.

My first thought was to fire up the Sysinternals procmon.exe tool. Even after filtering it down to just rundll32-related activity, nothing jumped out at me. So I started searching the registry. It didn't take long before I found what I was looking for.

"HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Print\Connections"

That registry location contains the list of printer connections that should be mapped for each user. Those printers are kept separate from the locally installed printers on the system. I started checking this location and everything started working for me.
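Here is a sketch of the check I ended up with (Test-PrinterConnection is just my name for it). On the systems I looked at, the connection keys replace the backslashes in the printer's UNC path with commas, but verify that pattern on your own machines:

function Test-PrinterConnection
{
    param
    (
        [Parameter(Mandatory)]
        [string]$Path   # e.g. "\\server\printer name"
    )

    $connections = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Print\Connections"
    # "\\server\printer name" becomes ",,server,printer name" as a key name
    $keyName = $Path -replace '\\', ','
    Test-Path (Join-Path $connections $keyName)
}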

If you came here trying to map a printer this way, here are the commands that I used:

$path = "\\server\printer name"

# Map a printer
rundll32 printui.dll,PrintUIEntry /ga /n$path 

# remove the printer
rundll32 printui.dll,PrintUIEntry /gd /n$path

# list printer connections
rundll32 printui.dll,PrintUIEntry /ge