Tuesday, December 02, 2014

Using Pester to validate DSC resources and configurations Part 2

Pester tests are like any other script. They grow and evolve over time. Here are a few more tests for my DSC resources and configurations that I recently added to my collection.

Does every resource have a pester test?
This is probably one of the most important tests I have. Every resource should have a test, so why not test for that?

describe "DSCResources located in $PSScriptRoot\DSCResources" {

  $ResourceList = Get-ChildItem "$PSScriptRoot\DSCResources"

  foreach($Resource in $ResourceList){
    context $Resource.name {

      it "Has a pester test" {

        ($Resource.fullname + "\*.test.ps1") | should exist
      }
    }
  }
}

If it is a standard resource, does it have the files it needs?
Each DSC resource needs to have two files in it: a *.psm1 file and a *.schema.mof file. I use the *.psm1 file as a quick way to tell standard resources apart from composite resources. I know I will never reach a test condition that would cause one of these to fail, but I left it in place so I could change the logic later.

if(Test-Path ($Resource.fullname + "\$Resource.psm1")){

  it "Has a $Resource.schema.mof" {
    ($Resource.fullname + "\$Resource.schema.mof") | should exist
  }
  it "Has a $Resource.psm1" {
    ($Resource.fullname + "\$Resource.psm1") | should exist
  }
}

Does it pass Test-xDscSchema and Test-xDscResource tests?
I may as well test for these as part of my pester tests. They already validate a lot of things that are easy to overlook.

it "Passes Test-xDscSchema *.schema.mof" {
  Test-xDscSchema ($Resource.fullname + "\$Resource.schema.mof") | should be $true
}
it "Passes Test-xDscResource" {
  Test-xDscResource $Resource.fullname | should be $true
}

If it is a composite resource, does it have the required files?
A composite resource uses different files than a standard resource. It has a *.psd1 and a *.schema.psm1 that should exist. I don't have any Test-xDsc functions for the composite resources, so I add a few extra checks. I verify that the *.psd1 file references the *.schema.psm1 and that the module does not throw any errors when dot-sourcing it.

  it "Has a $Resource.schema.psm1" {
    ($Resource.fullname + "\$Resource.schema.psm1") | should exist
  }
  it "Has a $Resource.psd1" {
    ($Resource.fullname + "\$Resource.psd1") | should exist
  }
  it "Has a psd1 that loads the schema.psm1" {
    ($Resource.fullname + "\$Resource.psd1") | should contain "$Resource.schema.psm1"
  }
  it "dot-sourcing should not throw an error" {
    $path = ($Resource.fullname + "\$Resource.schema.psm1")
    { Invoke-Expression (Get-Content $path -Raw) } | should not throw
  }

I hope you find these examples useful. If you want to see more, take a look at part 1.

Tuesday, November 25, 2014

Setting HKEY_CURRENT_USER with a DSC resource

I built a fun new resource for managing registry settings. “DSC already has a resource for managing the registry,” you say? This one sets user registry values for all users on the machine.

    KevMar_UserRegistry DisableScreenSaver
    {
        ID        = "DisableScreenSaver"
        Key       = "HKEY_CURRENT_USER\Control Panel\Desktop"
        ValueName = "ScreenSaveActive"
        ValueData = "0"
    }

How cool is that? The built-in DSC registry resource can only manage system settings. For servers this is all you really need. But if you have to manage user settings for some reason, forget about it. You need to use my resource to do it.

There are several limitations with my implementation to understand before we dive into how it works.

First, this setting applies to all existing users and every new user once it is set. So if you remove this setting from future configurations instead of using the Ensure = "Absent" option, new users on the system will continue to get the setting. The good news is that using Ensure = "Absent" does stop this from applying to new users.

Second, this sets the value only once per user. This kind of breaks the idea of DSC correcting configuration drift. If the setting needs to be reapplied, there is a version attribute that must be used and incremented. Each user keeps track of what version of the setting they have applied; increasing the version signals that something has changed and the value needs to be set again. This is important if you are changing the ValueData to something different.
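To make that concrete, here is a hypothetical snippet showing what a version bump might look like. The Version property name and its string type are my assumptions based on the description above:

```powershell
# Hypothetical example: the ValueData changed, so the version is
# incremented to signal each user to reapply the setting.
KevMar_UserRegistry DisableScreenSaver
{
    ID        = "DisableScreenSaver"
    Key       = "HKEY_CURRENT_USER\Control Panel\Desktop"
    ValueName = "ScreenSaveActive"
    ValueData = "1"   # changed from "0"
    Version   = "2"   # bumped so existing users apply the new value
}
```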

Third, these registry settings are only applied at user logon. I am using a method that hooks into the user logon process to apply the registry settings. I do not flag a reboot to DSC. I considered it, but if you start managing user settings, there can be a huge number of these in your configurations, and requiring a reboot for each one feels like a bit much. In my use case, I did not want the reboot. This is also why marking it as Absent can stop it from applying to any more users.

I’ll do a write up about how I did this in a future post. I used a Windows feature that is not very well known to most systems admins. I have it posted over at https://github.com/kmarquette/Powershell/tree/master/DSCModules/KevMar/DSCResources if you want to check it out.

Monday, November 24, 2014

Use Show-Command for a quick PowerShell GUI

We all love PowerShell and know how awesome it is. But not everyone we work with is as willing to drop to the shell as we are. The good news is there is a very easy way to give them the GUI they think they need. Write your advanced function like you already do and have them run it with Show-Command.

Show-Command Set-Service
This beautiful little command pops up and asks them for the information needed to run the command. Give it a try with Show-Command Set-Service

Not only are the required values marked, you even get a drop down box for some options. This works if you target *.ps1 files too.
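As a sketch of that, save something like this as a script and point Show-Command at it. The script name and parameters here are made up for the example:

```powershell
# Hypothetical script saved as Set-Widget.ps1. Show-Command builds its
# form from the param block: Mandatory marks the field as required and
# ValidateSet becomes a drop down.
param(
    [Parameter(Mandatory = $true)]
    [string]$Name,

    [ValidateSet('Small', 'Medium', 'Large')]
    [string]$Size = 'Medium'
)

"Setting widget $Name to size $Size"
```

Then run Show-Command .\Set-Widget.ps1 to get the generated GUI.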

As cool as this is, Show-Command gets better. Run it on its own and it gives you a list of all commands on the system. The filter makes it very easy to find what you are looking for. 

Thursday, November 20, 2014

$Using: scope in DSC script resources

If you have spent any time working with DSC resources, you have found yourself needing to use the Script resource for some things.

Script Script_Resource
{
    GetScript  = {}
    SetScript  = {}
    TestScript = {}
    DependsOn  = ""
    Credential = ""
}

It is easy enough to use. Define your Get, Set, and Test PowerShell commands and you are all set. Everything is good until you need to pass variables into your script blocks. You will quickly find that it does not work like you would expect.

This command will always return false:

$path = "c:\windows\temp"
Script Script_Resource
{
    TestScript = {
        Test-Path $path
    }
}

Because $path is not defined to a value within the scope of the TestScript, Test-Path will return false. Take a look at the mof file and you can see this.

instance of MSFT_ScriptResource as $MSFT_ScriptResource1ref
{
 ResourceID = "[Script]Script_Resource";
 TestScript = "\n Test-Path '$path'\n ";
 SourceInfo = "::7::9::Script";
 ModuleName = "PSDesiredStateConfiguration";
 ModuleVersion = "1.0";
};


I have found two ways to deal with this issue. If you think about it, the TestScript is just a string that gets run on the target node. If you look at the resource, TestScript is defined as a String.

$path = "c:\windows\temp"
Script Script_Resource
{
    TestScript = "Test-Path '$path'"
}

This works really well when the command is a very simple one line script. Take a look at the mof file now.

instance of MSFT_ScriptResource as $MSFT_ScriptResource1ref
{
 ResourceID = "[Script]Script_Resource";
 TestScript = "Test-Path 'c:\\windows\\temp'";
 SourceInfo = "::6::9::Script";
 ModuleName = "PSDesiredStateConfiguration";
 ModuleVersion = "1.0";
};


This could end up very messy if the script block gets more complicated. What if you have variables that you want to define in the script and others that come from the parent scope? You end up escaping things with that horrible backtick. But there is a better way.

This is where the $using: scope comes to the rescue. As far as I can tell, this is undocumented for use in script resources. But using it in Invoke-Command script blocks will allow you to reference variables in the parent scope. It works for our script resource too.
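For comparison, this is the documented use of $using: with Invoke-Command; 'Server01' is a placeholder computer name:

```powershell
# $using: pulls a variable from the local session into the remote
# script block; without it, $path would be undefined on the far end.
$path = "c:\windows\temp"
Invoke-Command -ComputerName Server01 -ScriptBlock {
    Test-Path $using:path
}
```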

$path = "c:\windows\temp"
Script Script_Resource
{
    TestScript = {
        Test-Path $using:path
    }
}

Now when we dive into the mof file, we can see just how this magic works. Our $path gets inserted into the script as an assignment at the top.

instance of MSFT_ScriptResource as $MSFT_ScriptResource1ref
{
 ResourceID = "[Script]Script_Resource";
 TestScript = "$path='c:\\windows\\temp'\n Test-Path $path\n ";
 SourceInfo = "::7::9::Script";
 ModuleName = "PSDesiredStateConfiguration";
 ModuleVersion = "1.0";
};

The $using: scope is something I often overlook but this will be a very handy way to use it.

One final note about my examples. I did trim them down to minimize the code. If you want to recreate my tests, you will need to have the SetScript and GetScript properties defined for each script block.

Monday, November 03, 2014

Using Pester to validate DSC resources and configurations

The more I use Pester, the more I like it. I found some ways to leverage it in validating my use of DSC Resources and configurations. Here are some samples to give you an idea of what I am talking about.

Are my resources loaded?
Often I will create a new resource thinking it will work, only to find it is not loaded for some reason. At the moment all my resources are in the same module, so I have this test that uses my folder structure to check for each loaded resource.

$LoadedResources = Get-DscResource

Describe "DSCResources located in $PSScriptRoot\DSCResources" {
    $ResourceList = Get-ChildItem "$PSScriptRoot\DSCResources"
    Foreach($Resource in $ResourceList){
        It "$Resource Is Loaded [Dynamic]" {
            $LoadedResources |
                Where-Object{$_.name -eq $Resource.name} |
                Should Not BeNullOrEmpty
        }
    }
}

Can my resource generate a mof file?
The next thing I want to know is if it will generate a mof file. I create a test like this for every DSC resource. I use the TestDrive: location to manage temporary files.

Describe "Firewall" {
    It "Creates a mof file" {

        configuration DSCTest {
            Import-DscResource -ModuleName MyModule
            Node Localhost {
                Firewall SampleConfig {
                    State = "ON"
                }
            }
        }

        DSCTest -OutputPath TestDrive:\dsc
        "TestDrive:\dsc\localhost.mof" | Should Exist
    }
}

Do my Test and Get functions work?
For some of my resources, I even test the Get and Test functions inside the module. I first have to rename the script so I can load it. Also notice I use a Mock to keep Export-ModuleMember from throwing errors.

$here = Split-Path -Parent $MyInvocation.MyCommand.Path

Describe "Firewall" {
    Copy-Item "$here\Firewall.psm1" TestDrive:\script.ps1
    Mock Export-ModuleMember {return $true}

    . "TestDrive:\script.ps1"

    It "Test-TargetResource returns true or false" {
        Test-TargetResource -state "ON" |
            Should Not BeNullOrEmpty
    }

    It "Get-TargetResource returns State = on or off" {
        (Get-TargetResource -state "ON").state |
            Should Match "on|off"
    }
}

Edit: I added a part 2.

Reading the Windows PowerShell Language Specification Version 3.0

Anytime I spend a good deal of time with a technology, I eventually track down the core documentation. The real technical stuff that shows you everything. I just rediscovered the Windows PowerShell Language Specification Version 3.0. It would be nice to have one for PowerShell 4.0, but I have not seen one yet. I rediscovered a few things that I either missed before or was not ready to understand yet. Here are some hidden gems that I would like to share with you.

Wildcard attributes
We use these all the time for partial matches. The cool thing is that we can use them in file paths to skip over folders that may have any name. Let me show you how we can use this to fix an old problem of mine.

Our home folders are also the users' My Documents folders. Every time a user logs in, a desktop.ini file gets created in that location. It does this cool trick of renaming the folder to say "My Documents". This is nice, unless you are an admin looking at a list of 500 folders called "My Documents". Here is a quick fix to delete all of those files.

Get-ChildItem d:\profile\*\desktop.ini | Remove-Item -Force

ForEach-Object -Member
I can't say that I have many uses for this one, but I think it is interesting. The ForEach-Object cmdlet has a -MemberName parameter. If you provide it an object's method name, it will call that method for each object. Here is one good example of using it to uninstall software.

Get-WMIObject Win32_Product | Where-Object name -match Java | ForEach-Object -MemberName Uninstall

Or we can shorthand this same command even more like this:

gwmi win32_product | ? name -match Java | % uninstall

$PSBoundParameters
This variable contains a dictionary of all the parameters that were passed to the function or script. The cool thing is that you can splat these to another function. This would be great when you are creating a wrapper around a function that takes lots of parameters. I don't have a good example off hand, but I know I have code out there that would be much cleaner with this.
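For what it is worth, here is a minimal sketch of the idea; both function names are made up for the example:

```powershell
# Write-LoudGreeting wraps Write-Greeting and forwards only the
# parameters the caller actually supplied by splatting $PSBoundParameters.
function Write-Greeting
{
    param($Name, $Greeting = 'Hello')
    "$Greeting, $Name!"
}

function Write-LoudGreeting
{
    param($Name, $Greeting = 'Hello')
    (Write-Greeting @PSBoundParameters).ToUpper()
}

Write-LoudGreeting -Name 'Kevin'
# HELLO, KEVIN!
```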

Thursday, July 24, 2014

Loading .Net Assemblies into Powershell

I have some functionality written in C# that I want to use in PowerShell. I compiled the code I wanted into a dll and copied it to my target system. Then I used this snip of code to load and use it:

# Load the compiled assembly first; the path and type names are examples
Add-Type -Path ".\myLibrary.dll"

$myObject = New-Object myNamespace.myClass

If my functions return objects, I can use them just like I would any other object in PowerShell. I just found this as another way to use the .Net Framework.

Sunday, July 13, 2014

The compounding power of automation

I was recently reviewing some of my past automation and development projects. I took the time to calculate the man hours my projects saved the organization. Over the last 9 years it has added up to some substantial savings. I have directly saved 44,000+ man hours. Because those tasks were automated, the frequency of that work was increased. I estimate that over a 5 year window, my projects are doing the work of 215,000+ man hours.

I want to take a moment to point out this xkcd.com image below. I used the 5 year metric because of this chart.

I think every system administrator automates things all the time without thinking about it. I included several of those in my calculation.

Every week, we would make a copy of the production data onto a second server for reports. It would take me about an hour to create a one off backup, restore it to a second server, and run some post processing scripts. If I spent an hour each week over the last 9 years, it would have taken 468 hours of my time. No admin in their right mind is doing something like this by hand. I automated it and did something else more productive with those 468 hours.

The advantage of automating it was running it more often to give the business better access to the data. I made it a daily process and automated what would have been 2,340 man hours of time to do the same thing.

I have one project where I saved 4 seconds (an 80% improvement) off of 1.1 million actions. One automation script took my department out of the account provisioning process, saving 270 hours over 3 years. I have another one that took someone 1 week to generate a set of reports 4 times a year, and I made the whole set process daily. There are 20+ projects where I saved the company time and made it more productive.

These savings are not imaginary. There are a few cases where staff resources were reassigned to other areas because of this automation. Part of the reason I got involved in many of these projects is because they took too much time and there had to be a better way. I am good at finding that better way.

Tuesday, July 01, 2014

Why do I have to wait for my computer to turn on? Can't it figure out when I need to use it?

One thing we have a lot of in IT is logs. We log everything and then some. I often look at this large collection of data and would like to do more with it. One set of our logs contains every time my users log on or off of a computer. I think I found a clever way to use that information.

We also have a lot of old computers. Sometimes they take longer than we want to start up. I love how fast Windows 8.1 handles things, but we don't have that installed everywhere. I know a lot of our users will turn the computer on first thing in the morning and go do other things while it starts. In some areas, the first person in starts everyone else's computer. Other people just never turn them off.

What if our computers knew that you start to use your computer at 8:00 every day? Why can't it just turn on at 7:45? Why can't we figure this out for every computer and just take care of it for our users? We can, and here is how I did it.

I parse 6-8 weeks' worth of logs that record every time someone gets logged on. I have user, computer, and time. For this, I don't care who the user is. I parse the log to figure out what the computer's usage pattern is. I assume a weekly cycle, and that makes my results more accurate.

Then I have a script that runs all the time to check that list and wake up computers that will be used in the next 15 minutes. All of our computers are already configured for wake on Lan so that is how we start those computers.
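Since I mentioned Wake on LAN, here is a minimal sketch of sending the magic packet. The MAC address is a placeholder and error handling is omitted; this is not the script from the post:

```powershell
# The magic packet is 6 bytes of 0xFF followed by the target MAC
# address repeated 16 times, sent as a UDP broadcast (port 9 is common).
function Send-WakeOnLan
{
    param([string]$MacAddress)   # e.g. '00-11-22-33-44-55'

    $macBytes = $MacAddress -split '[:-]' |
        ForEach-Object { [Convert]::ToByte($_, 16) }
    $packet = [byte[]]((,0xFF * 6) + ($macBytes * 16))

    $udp = New-Object System.Net.Sockets.UdpClient
    $udp.EnableBroadcast = $true
    $udp.Connect([System.Net.IPAddress]::Broadcast, 9)
    [void]$udp.Send($packet, $packet.Length)
    $udp.Close()
}

Send-WakeOnLan -MacAddress '00-11-22-33-44-55'
```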

I don't know if you have worked with Wake on Lan before, but it has its own nuances that I will save for another post.

Monday, June 30, 2014

rundll32 printui.dll,PrintUIEntry to map network printers

Have you ever used rundll32 printui.dll,PrintUIEntry to map a network printer on a computer? This command can map a printer for every user and can also be used to remove that same mapping. When you think about it, this behavior is a little odd.

I say that because mapping printers is a user profile based action. If you want a printer to show up for everyone, you have to install it on the computer directly. But somehow this command gets around that.

I was writing a DSC component that used this command and I ran into an issue trying to verify a printer was mapped. CIM_Printer and Win32_Printer could not find the printer when run in the context of DSC. I suspect that the system account can't see the printer. Once a user is logged in, the connection to that printer is established. I had to find another way to identify these mapped printers.

My first thought was to fire up the Sysinternals procmon.exe tool. Even after filtering it down to just rundll32 related activity, nothing jumped out to me. So I started searching the registry. It didn't take long and I found what I was looking for.

"HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Print\Connections"

That registry location contains the list of printer connections that should be mapped for each user. Those printers are kept separate from the locally installed printers on the system. I started checking this location and everything started working for me.
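If you want to see what is recorded there, something like this lists the entries. My reading is that each subkey name encodes the print server and printer of the mapping:

```powershell
# List the per-user printer connections recorded by printui /ga;
# each subkey name identifies one mapped printer connection.
$key = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Print\Connections"
if (Test-Path $key)
{
    Get-ChildItem $key | Select-Object -ExpandProperty PSChildName
}
```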

If you came here trying to map a printer in this way, here are the commands that I used

$path = "\\server\printer name"

# Map a printer
rundll32 printui.dll,PrintUIEntry /ga /n$path 

# remove the printer
rundll32 printui.dll,PrintUIEntry /gd /n$path

# list printer connections
rundll32 printui.dll,PrintUIEntry /ge

Wednesday, June 25, 2014

LocalConfigurationManager{ DebugMode = $true }

One issue that I ran into when writing my custom DSC resources is that the Local Configuration Manager would cache the modules I was working on. It took me a while to realize that was going on. That is when I discovered DebugMode for the Local Configuration Manager.

If you enable debug mode, it will reload the module every time. This is off by default for performance reasons. You have to use a DSC configuration to configure this, but it is done in a slightly different way than normal DSC configurations. TechNet has all the details on how to configure the Local Configuration Manager. That is where I pulled this sample code from.

If all you want to do is enable debug mode, here is the quick script to do that:

Configuration ExampleConfig
{
    Node "localhost"
    {
        LocalConfigurationManager
        {
            DebugMode = $true
        }
    }
}

# The following line invokes the configuration and creates a file called localhost.meta.mof at the specified path
ExampleConfig -OutputPath "c:\users\public\dsc"

# Notice the use of Set-DSCLocalConfigurationManager
Set-DscLocalConfigurationManager -Path "c:\users\public\dsc"

If you are not using WMF5 CTP Release, then you have to fall back on killing the process hosting DSC. Here is a quick script to do that:

# find the Process that is hosting the DSC engine
$dscProcessID = Get-WmiObject msft_providers | 
  Where-Object {$_.provider -like 'dsccore'} | 
  Select-Object -ExpandProperty HostProcessIdentifier 

# Kill it
Get-Process -Id $dscProcessID | Stop-Process

Wednesday, June 18, 2014

KevMar_TcpPrinter v1.0.2

I just pushed an update to my TcpPrinter resource that can now handle basic driver installations. You can optionally provide a path to the inf files needed for the installation. The resource will try and use those if the driver does not exist.

There will be some limitations to this approach but I felt it was important to provide this option. For now, make sure the files are on the target system. You may have to use another resource to copy those files to the target.

Another change in this release is that Ensure="Absent" will try to remove the port and driver if they are no longer in use. This should help keep the system a little cleaner. I did add the requirement that the DriverInf property must be defined for it to remove a driver. This assumes that you installed the driver with this resource and can easily add it again. I also wanted to avoid removing any of the built-in drivers.

Changes in v1.0.2
  Added DriverInf property
    Allows you to provide the location to the driver's inf file to be used during printer installation
  Modified Ensure="Absent"
    Will remove the printer port if the removed printer was the last one using it
    Will remove the driver only if the DriverInf is defined and the removed printer was the last one using it


Monday, June 16, 2014

DSC Resource: KevMar_MapPrinter

Last week I put together a resource for adding TCP printers with Desired State Configuration. After I finished that one, I knew I needed to create another one that loads printers from a print server. They go hand in hand even if they have different use cases. I can see using TcpPrinter for a print server and MapPrinter for a terminal server.

This one turned out to be much simpler to put together. I may end up doing a more detailed write up in the future just to show how easy it can be to create a DSC resource. Take a look at how simple the config looks for MapPrinter.

MapPrinter NetworkPrinter
{
    Path = '\\server\EpsonPrinter'
}

It does not get much simpler than that. And removing a printer mapped this way is just as easy.

MapPrinter NetworkPrinter
{
    Path   = '\\server\EpsonPrinter'
    Ensure = "Absent"
}

This will map this printer for every user on the system. There is no way to target just a user with the method that I used. To correctly remove a printer mapped this way, it is best to use the DSC resource to do so. If you have to remove it by hand, here is the command that does the magic.

$path = "\\server\EpsonPrinter"
rundll32 printui.dll,PrintUIEntry /gd /n$path

It feels like the printer is slow to show up the first time this is run. Restarting the spooler speeds up the process. I don't think the spooler immediately sees the changes.

The KevMar_MapPrinter was added to my other module that I have up on https://github.com/kmarquette/Powershell

Friday, June 13, 2014

DSC Resources: KevMar_TcpPrinter, KevMar_WindowsUpdate

I put together a set of Desired State Configuration Resources into a module. I was looking for a good project and I did not see any other resources that managed printers or Windows Updates yet.


KevMar_TcpPrinter
This is a printer management resource that will also create printer ports as needed. Most of the settings are optional as long as you specify the name, driver, and IP address of the printer.

At the moment, the driver already needs to be installed on the target node. Driver management is one of the next things for me to look at after I do some more testing.


KevMar_WindowsUpdate
The Windows Updates resource is a composite resource that wraps together several registry resources. Each of its properties maps to one or more registry keys related to automatic updates.

I have all the code up on https://github.com/kmarquette/Powershell with some example configurations.

Why can't things just work?

I love to learn new things. I tend to dive in and just run with it. I just put together a new DSC resource for managing printers. Everything felt very solid as I was testing and things were just working. Running the Get, Set, and Test TargetResource commands by hand felt solid. Then I created my first config and pushed it with Start-DscConfiguration. I ran into one small little issue that turned into a major roadblock.

How hard can it be to add a printer?

Under the hood, I was just using WMI to add printers and printer ports. I could easily update and delete existing printers. I could also create new printer ports without issue. My major road block was that my code to create a new printer would only fail when ran with Start-DscConfiguration. One little feature that is kind of critical to the whole project.


I was creating a new Win32_Printer object and using the .Put() command to save it. The error message I got was very nondescript. I used a try/catch block to report back the inner message of the exception. Access Denied is all it said.

I tried creating WMI objects in different ways but was not making any progress. I validated all the properties and tested setting more or fewer of them at once. I took a closer look at the .Put() command and found it had a PutOptions overload. I had to track down how to create the right object because it didn't accept raw int values. I discovered that there was a .psbase.Put() command that I was not using before. Trial and error.


I also discovered that Powershell and VBScript use different wrappers for WMI objects. For some reason VBScript uses a .put_() command (yes, that underscore is intentional) to commit changes. I was able to use an existing VBScript in the Windows folder so I didn't have to write my own. It worked well enough with manual testing, but it also failed to work correctly when run as a DSC resource. I had hoped that the different wrappers would somehow make a difference. But I was mistaken.


I would have loved to use Add-Printer, but I am working with Windows 7. That gave me an idea to look at the CIM instances. Again, a different wrapper. While the CIM_Printer class is easy to pull data out of, I don't think it was ever intended to be used for adding printers. I tried to create a new object but did not have much luck. As I explored the object, all the properties appeared to be read-only. When trying to create a new one, it complains about missing key fields. When I tried to work that out, I didn't make any progress.


I figured there had to be some way to add a printer from the command line without using WMI. I tracked down a rundll32 command that I had used in the past for other things (Rundll32 printui.dll,PrintUIEntry). I had a good feeling that it would have worked, but I didn't like the required parameters as much, mostly because it wanted more details about the driver than I was using. Something I could change, but it didn't feel as clean as I wanted this part to be. But it did give me another idea.


I started to look at system calls and found the AddPrinter function I was looking for. It required exactly what I wanted it to require. It was almost too good to be true. Then I was looking at my next challenge: how do I P/Invoke from PowerShell? I found a little snip of code where they used C# to do the P/Invoke from PowerShell. That looked like as good an approach as any.

C# and Powershell

I had my C# code fleshed out and needed to get it running in PowerShell. I wanted to avoid having an external assembly, so I had to figure out how to do it inline. The Add-Type command turned out to be exactly what I was looking for. After importing my C# code, I was off and running.
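As a sketch of the technique (not the actual resource code), this is the Add-Type pattern with a trivial Win32 call standing in for AddPrinter, which needs much larger struct definitions:

```powershell
# Compile a small C# wrapper inline and call the native function from
# PowerShell. GetTickCount is a simple stand-in for winspool's AddPrinter.
Add-Type -TypeDefinition @"
using System.Runtime.InteropServices;

public static class NativeMethods
{
    [DllImport("kernel32.dll")]
    public static extern uint GetTickCount();
}
"@

# Milliseconds since the system started
[NativeMethods]::GetTickCount()
```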

It just works

I was able to drop it into my DSC resource with minimal adjustment and it worked perfectly. That last piece fell into place and my DSC resource was working wonderfully. This project started out as one of the most basic resources that I could think of, and it ended up taking me on a journey across many different technologies before I was done.

The end result is a TCPPrinter DSC resource that I have added to my KevMar modules.

Monday, June 02, 2014

Writing DSC Resources in C#

I wanted to mention this real quick so that I could find it later. The Powershell blog just mentioned that you can write DSC resources in C#.


Sunday, June 01, 2014

Using Desired State Configuration to Set Local Passwords

DSC has a User resource that allows you to create and configure local accounts. While you can set the local account password this way, you have to store the password in plain text to do so. So if you decide to set the administrator password this way, the mof file on the machine will contain that password in plain text. This is a perfect example of: just because you can do something does not mean that you should.

This turned out to be more complicated than I expected. I was able to find a post by Aman Dhally that dug into the details and this was the result.

$ConfigData = @{
    AllNodes = @(
        @{ NodeName = "*"; PSDscAllowPlainTextPassword = $true }
        @{ NodeName = "localhost"; }
    )
}

Configuration LocalPasswordConfig
{
    $secpassword = ConvertTo-SecureString "Password1" -AsPlainText -Force
    $mycreds = New-Object System.Management.Automation.PSCredential("Administrator",$secpassword)

    Node $AllNodes.NodeName
    {
        User LocalAccount{
            UserName = "Administrator"
            Password = $mycreds
        }
    }
}
If you don't want to have your password in plain text in your config files, you can pass in a credential object. But the .mof file will still have the plain text password.

Configuration LocalPasswordConfig
{
    param(
        [PSCredential]$mycreds
    )

    Node $AllNodes.NodeName
    {
        User LocalAccount{
            UserName = "Administrator"
            Password = $mycreds
        }
    }
}

$cred = Get-Credential
LocalPasswordConfig -mycreds $cred -ConfigurationData $ConfigData

It may be possible to use a certificate to solve the plain text issue, but I am still trying to get my head wrapped around it. I see what looks like a good example here. See the example script at the bottom of that page.

Change Local Account Password

I wrote a CmdLet the other day to change the local account password. It was a good exercise in working with Powershell, but all the meat of the command boiled down to 2 lines of code.

$admin = [adsi]("WinNT://$ComputerName/$AccountName, user")
$admin.psbase.invoke("SetPassword", $Password)

While I was looking for that command, I remembered this other way to do the same thing.

net user $account $password

Thursday, May 29, 2014

Change Local Account Password CmdLet

There are lots of ways to change the password on local machine accounts. I used this as a sample project as I was exploring all the features of CmdLets. At first glance, this code is overkill for the task at hand. It is a good example of how to implement -WhatIf, -Confirm, and -Force. I have another post that shows just the 2 lines of code needed to change the local account password.

#AccountManagement is used for verifying password changes
Add-Type -AssemblyName System.DirectoryServices.AccountManagement

function Set-AccountPassword
{
    <#
    .SYNOPSIS
       Sets the password for a local machine account
    .DESCRIPTION
       It will set a password on a remote machine for the specified account with the specified password
    .EXAMPLE
       Set-AccountPassword -ComputerName localhost -AccountName Administrator -Password BatteryStapleHorse
    .EXAMPLE
       Set-AccountPassword -Password BatteryStapleHorse -SkipVerify -Force
    #>
    [CmdletBinding(SupportsShouldProcess = $true,
                   HelpUri = 'http://www.microsoft.com/',
                   ConfirmImpact = 'High')]
    param
    (
        # ComputerName help description
        [string]$ComputerName = "$env:computername",

        # AccountName help description
        [string]$AccountName = 'Administrator',

        # Password help description
        [Parameter(Mandatory = $true)]
        [string]$Password,

        # Skip verifying the password after it is changed
        [switch]$SkipVerify,

        # Suppress the confirmation prompt
        [switch]$Force
    )

    process
    {
        Write-Verbose "Testing connection to $ComputerName before we try and change the password"
        if(Test-Connection $ComputerName -Count 1){
            Write-Verbose "$ComputerName is online"

            #Example of support for -WhatIf
            #Also used with -Confirm and ConfirmImpact options
            if ($pscmdlet.ShouldProcess("$ComputerName\$AccountName", "SetPassword")){

                #Example of support for -Force, this will prompt every time unless the -Force param is used
                if($Force -or $pscmdlet.ShouldContinue("Change the password for this account: $ComputerName\$AccountName","Setting Password")){

                    Write-Verbose "Using ADSI for connection to WinNT://$ComputerName/$AccountName"
                    $admin = [adsi]("WinNT://$ComputerName/$AccountName, user")

                    Write-Verbose "Invoking SetPassword on $ComputerName"
                    $admin.psbase.invoke("SetPassword", $Password)

                    # This will verify that the password was changed to $Password
                    # SkipVerify is an optional param
                    if(-not $SkipVerify){
                        Write-Verbose "Verifying that the password changed correctly"
                        $obj = New-Object System.DirectoryServices.AccountManagement.PrincipalContext('machine', "$ComputerName")
                        if($obj.ValidateCredentials($AccountName, $Password)){
                            Write-Verbose "Verified!"
                        }
                        else {
                            Write-Error "Failed to verify password change"
                        }
                    }
                    else {
                        Write-Verbose "SkipVerify=True skipping verify check"
                    } #SkipVerify
                } #ShouldContinue
            } #ShouldProcess
        } #Test-Connection
    } #Process
}

Wednesday, May 28, 2014

Simple DSC Example

One thing I am doing to get more practice with Desired State Configuration is to create configurations for existing servers. I am grabbing the easiest things to script and working out from there. Here is one I quickly put together for our terminal server boxes.

$ConfigData = @{
    AllNodes = @(
        @{
            NodeName = "*"
        }
        @{
            NodeName       = "ProdServer1"
            Database       = "database"
            DatabaseServer = "DBServer"
        }
        @{
            NodeName       = "ProdServer2"
            Database       = "database"
            DatabaseServer = "DBServer"
        }
        @{
            NodeName       = "QAServer1"
            Database       = "QAdatabase"
            DatabaseServer = "DBServer"
        }
    )
}

Configuration TerminalServer
{
    node $AllNodes.NodeName
    {
        WindowsFeature Backup {
            Name = "Windows-Server-Backup"
        }

        WindowsFeature DesktopExperience {
            Name = "Desktop-Experience"
            IncludeAllSubFeature = $true
        }

        WindowsFeature RDS {
            Name = "RDS-RD-Server"
            IncludeAllSubFeature = $true
        }

        Registry ProductDatabase {
            Key = "HKLM:\SOFTWARE\company\product"
            ValueName = 'InitialCatalog'
            ValueData = $node.Database
        }

        Registry ProductDatabaseServer {
            Key = "HKLM:\SOFTWARE\company\product"
            ValueName = 'DataSource'
            ValueData = $node.DatabaseServer
        }
    }
}

TerminalServer -ConfigurationData $ConfigData

Start-DscConfiguration -Wait -Verbose -Path .\TerminalServer
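Once Start-DscConfiguration finishes, the built-in DSC cmdlets make it easy to confirm the node actually converged:

```powershell
# Returns True if the node currently matches the compiled .mof
Test-DscConfiguration -Verbose

# Dumps the current state of every resource in the applied configuration
Get-DscConfiguration
```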

Tuesday, May 27, 2014


I recently wrote a script that parses Windows events to report on printing by user or by printer. To make that happen, it is important that a special log is enabled. I was able to create a small script to do that for me.

$EventLog = Get-WinEvent -ListLog Microsoft-Windows-PrintService/Operational
$EventLog | %{
    $_.IsEnabled = $true
    $_.LogMode = "AutoBackup"
    $_.SaveChanges()
}

One thing you will notice is that I call SaveChanges() after I set all the values. None of the settings will be saved if you don't do that. It is one of those details that could easily be missed if you were not looking for it.

I polished it up a bit as a CmdLet: Enable-PrintHistory

Wednesday, May 21, 2014

How do I track pages printed?

We had a 3rd party service managing our printers for a while. It didn't work out in the long run, but they gave us these nice reports showing us how many pages each printer printed in the previous quarter. I found that information very valuable though and kind of missed it.

After a little digging, I came up with a way to track that information without having to walk to every printer. To be honest, I had a lot of ideas, but I eventually found a Windows event log that gave me everything I needed. The event log is called Microsoft-Windows-PrintService/Operational. You first need to enable the log, but it collects a lot of good details.
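If you just want a quick one-liner, the same log can also be enabled with the built-in wevtutil tool from an elevated prompt:

```powershell
# Enable the PrintService operational log and show its configuration
# (wevtutil is built into Windows; requires an elevated prompt)
wevtutil sl "Microsoft-Windows-PrintService/Operational" /e:true
wevtutil gl "Microsoft-Windows-PrintService/Operational"
```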

$log = Get-WinEvent -FilterHashTable @{ "LogName" = "Microsoft-Windows-PrintService/Operational"; "ID" = 307 }

Once we pull all of the events, they will be easy enough to parse with RegEx. I threw this together quite quickly, but it gets the job done. I have only tested this message format on Server 2012 R2.

 *RegEx excluded from the post because I can't get it to render correctly in blogger without a lot of rework. See script at the end.
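For reference, here is a hedged reconstruction of the kind of pattern the script uses. This is my own approximation based on the event 307 message text on Server 2012 R2, not the author's exact regex:

```powershell
# Approximate pattern for the event 307 message, which reads roughly:
# "Document 1, <name> owned by <user> on <machine> was printed on <printer>
#  through port <port>. Size in bytes: <n>. Pages printed: <n>."
# The named groups line up with the properties used below ($Matches.Document,
# $Matches.Printer, and so on).
$MessageRegEx = 'Document \d+, (?<Document>.+) owned by (?<UserName>\S+) ' +
                'on (?<ClientMachine>\S+) was printed on (?<Printer>.+?) ' +
                'through port (?<Port>.+?)\. Size in bytes: (?<Size>\d+)\. ' +
                'Pages printed: (?<Pages>\d+)'
```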

Once I parse out the values I need, I package it back into an object. From there you can do whatever you need to do with it.

$log | ?{$_.message -match $MessageRegEx} |
    %{ New-Object PSObject -Property @{
        "Document"  = $Matches.Document
        "TimeStamp" = $_.TimeCreated
        "Printer"   = $Matches.Printer
        "PrintHost" = $_.MachineName
    }}

Then write that out to a CSV file when you are done. If you take a look at the values that I can parse out of it, I get a lot more information than I expected. Not only can you get page counts per printer, you can track printed page counts back to individual users. I pull this dataset into Excel and transform it into a pivot table for easy reporting.
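The CSV step looks something like this, assuming the objects coming out of the pipeline above were captured in a $report variable (the output path is a placeholder):

```powershell
# Export the parsed print events for pivot-table work in Excel
$report | Export-Csv -Path C:\Reports\PrintHistory.csv -NoTypeInformation
```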

After I clean this up a bit, here is my resulting CmdLet: Get-PrintHistory