Monday, June 22, 2015

Quick and Dirty PowerShell Modules

So you have built some awesome scripts and turned them into advanced functions. What's next? It is time to put them into a module, and it can be a lot easier than you realize. Let's go step by step and build a module to hold all of your advanced functions.

# Start in our profile PowerShell folder
CD ~\Documents\WindowsPowerShell

# Create a folder for our module and functions
MD Modules\Other\Functions

# Create a module manifest
$Manifest = @{
    Path        = ".\Modules\Other\Other.psd1"
    RootModule  = ".\Other.psm1" # Module loader
    Author      = "Kevin Marquette"
    Description = "Odds and ends"
}
New-ModuleManifest @Manifest -Verbose

# Create our module loader (that loads our advanced functions)
$ModuleLoader = @'
  $moduleRoot = Split-Path -Path $MyInvocation.MyCommand.Path

  Write-Verbose "Importing Functions"
  # Import everything in the functions folder
  "$moduleRoot\Functions\*.ps1" |
      Resolve-Path |
      Where-Object { -not ($_.ProviderPath.Contains(".Tests.")) } |
      ForEach-Object { . $_.ProviderPath ; Write-Verbose $_.ProviderPath }
'@

Set-Content -Value $ModuleLoader -Path .\Modules\Other\Other.psm1

# Now create a single file for each advanced function and place it in the functions folder
# Sample function
$TestFunction = @'
    function Test-Other {
        Write-Output "Hello World!!"
    }
'@
Set-Content -Value $TestFunction -Path .\Modules\Other\Functions\Test-Other.ps1

# Load it and test it out
Import-Module Other -Verbose -Force
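
With the module imported, it is worth a quick check that the loader did its job. Assuming the generated manifest exports the function (adjust FunctionsToExport in Other.psd1 if it does not), this should list Test-Other and then run it:

# Confirm the function was exported, then call it
Get-Command -Module Other
Test-Other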

You could easily place all of your functions into the Other.psm1 file and everything would still work. But this creates a framework that makes your functions easy to manage. That Functions folder can now be a dumping ground for all of your advanced functions. If you outgrow this module, you can create a new one and just move the functions over.

The module loader is what enables that. Later we will add Pester tests, and the loader already accounts for them by skipping files with .Tests. in the name. This is a pattern that I have seen used by other PowerShell MVPs, and it has greatly simplified my function management.
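
For example, a Pester test for our sample function could sit right next to it in the Functions folder. The file name below follows the *.Tests.ps1 convention that the loader filters on; the test itself is just a minimal sketch:

# Functions\Test-Other.Tests.ps1 - skipped by the loader because the name contains ".Tests."
$here = Split-Path -Parent $MyInvocation.MyCommand.Path
. "$here\Test-Other.ps1"

Describe "Test-Other" {
    It "returns the expected greeting" {
        Test-Other | Should Be "Hello World!!"
    }
}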

Friday, June 05, 2015

JoinDomainOrWorkgroup 1323 error unable to update the password

I am using the Win32_ComputerSystem WMI object to join machines to a domain with JoinDomainOrWorkgroup. I ran into an issue on XP/2003 where I would get a 1323 return code and the join would fail. This worked fine on Server 2012, so I started digging.

From the MSDN documentation, error 1323 means "unable to update the password". My first round of searches implied that the time was out of sync between the servers. But a manual join worked fine, and even after syncing the time to the DC before the join, I had the same issue.


Then I found a comment that said to add the domain to the username.


And then it worked.
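
In other words, pass the account as DOMAIN\user (or user@domain) rather than the bare username. Here is a rough sketch of the call, with a placeholder domain, account, and computer name:

# Win32_ComputerSystem exposes the JoinDomainOrWorkgroup method
$computer = Get-WmiObject -Class Win32_ComputerSystem -ComputerName SERVER01

# 1 = join a domain, 2 = create the computer account if it does not exist
$joinOptions = 1 + 2

$result = $computer.JoinDomainOrWorkgroup(
    "contoso.com",          # domain to join
    "P@ssw0rd",             # password for the join account
    "CONTOSO\joinaccount",  # username WITH the domain prefix - this is what cleared the 1323 error
    $null,                  # target OU, $null for the default Computers container
    $joinOptions
)
$result.ReturnValue         # 0 = success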

Monday, May 25, 2015

Solving the hardest problems with PowerShell. Where to go next?

Sometimes you run into a problem and you don't know where to go next. When I run into an issue, it often becomes an obsession for me to solve it. It is easy to say "google it," but sometimes you need to be able to look at your problem from different points of view for Google to actually help you. I have been writing PowerShell for a long time, and this is how I tackle those hard-to-solve problems.

First, use Get-Help, Get-Command, Show-Command, Get-Member, and Format-List * to discover commands and get information about the objects they return. The more advanced you get with PowerShell, the more you will use these commands, so build that habit. (Run Update-Help to get the most recent help content.)
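
As a quick illustration of that habit, this is the kind of discovery pass I make against an unfamiliar area (the service cmdlets here are just a stand-in for whatever you are exploring):

Get-Command -Noun Service              # which commands work with services?
Get-Help Get-Service -Examples         # show me how people call it
Get-Service | Get-Member               # what properties and methods do the objects have?
Get-Service Spooler | Format-List *    # dump every property, not just the default view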

Then turn to Google. Search for something like "PowerShell thing I am trying to do." There are a lot of good samples out there, and a lot of old ones. The old samples will still work, but you may miss the newer way to solve the problem if you are on Server 2012 / Windows 8. This is why those first commands are important. If you want examples, run Get-Help <command> -Examples.

Then search for a command line way to do it. If a command or a tool exists, then that’s the easiest way. Depending on what I find, I may look at other solutions and come back to this one.

If this is a standalone application, figure out how it stores its settings. Registry, text file or database. Knowing this will give you a direction. I’ll mention how to flesh these out later if you are unsure at this point.

Then search for a WMI solution. PowerShell and WMI play really well together. Windows is an API-based OS, and a lot of those APIs are exposed through WMI or CIM. People have been working with WMI for a long time and lots of examples exist. I solve a lot of problems with WMI. Once I know the class to look at, Get-Member and Format-List * help me explore it. There is also a Show-Object script floating around that may also help here.
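
The same discovery habit applies here. Something like this is usually my first pass at a WMI class (Win32_OperatingSystem is just an example):

Get-WmiObject -List *disk*                                # hunt for a class by name
Get-WmiObject -Class Win32_OperatingSystem | Get-Member
Get-WmiObject -Class Win32_OperatingSystem | Format-List *
Get-CimInstance -ClassName Win32_OperatingSystem          # the newer CIM cmdlets on 2012/Windows 8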

Then Google for how to solve your problem with the registry. The registry controls a lot of things, and I would bet that many of the PowerShell scripts for system configuration are just setting a value in the registry if they are not using WMI.
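
Once you know the key, the registry provider makes the read and the write one-liners. The ScreenSaveActive value below is just a convenient example:

# Read a value
Get-ItemProperty -Path 'HKCU:\Control Panel\Desktop' -Name ScreenSaveActive

# Set a value (the property is created if it does not already exist)
Set-ItemProperty -Path 'HKCU:\Control Panel\Desktop' -Name ScreenSaveActive -Value 0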

From here, I look for a VBScript or C#/.NET solution. If it feels like something an admin should be able to do, I will search harder for VBScript; odds are that someone has solved this issue before, and VBScript is easy to translate into simpler PowerShell once you have done it a few times. C# and .NET offer a lot of power, but you may be diving into some serious code at this point. You can take this to an extreme and look for Win32 API or system calls (rundll32-type stuff). That is never fun, but sometimes that is where the solution is.

Another approach is using Sysinternals tools to figure out how the system does what it is doing. Procmon is great: it watches every file and registry change that your system makes. So fire it up, make your change, and then start hunting for what the system did when you made that change. If SQL is involved, SQL Profiler is also a must-have tool.

Look for a GPO solution. If you can find one, remember that most of group policy is just setting registry keys. If there is an ADM file, you can dig into that for the actual key.

While you are doing your hunting, there are a few complications to be aware of. Is it a per-user setting or a per-machine setting? User profile settings can easily be adjusted by the user, but if you are remote or running with different credentials, there are often other roadblocks to deal with.

Also, there is nothing wrong with asking for help. I just wanted to give you a direction, even if this is all gibberish today. The more advanced you get at this, the deeper down the rabbit hole you can go. I have used every one of these techniques at some point to solve a problem.

Tuesday, December 02, 2014

Using Pester to validate DSC resources and configurations Part 2

Pester tests are like any other script: they grow and evolve over time. Here are a few more tests for my DSC resources and configurations that I recently added to my collection.

Does every resource have a Pester test?
This is probably one of the most important tests I have. Every resource should have a test, so why not test for that?

describe "DSCResources located in $PSScriptRoot\DSCResources" {

  foreach($Resource in $ResourceList)
    context $ {

      it "Has a pester test" {

        ($Resource.fullname + "\*.test.ps1") | should exist

If it is a standard resource, does it have the files it needs?
Each standard DSC resource needs to have two files in it: a *.psm1 file and a *.schema.mof file. I use the *.psm1 file as a quick way to tell a standard resource from a composite resource. I know I will never reach a test condition that would cause one of these to fail, but I left them in place so I could change the logic later.

if(Test-Path ($Resource.fullname + "\$Resource.psm1"))
{
  it "Has a $Resource.schema.mof" {
    ($Resource.fullname + "\$Resource.schema.mof") | should exist
  }
  it "Has a $Resource.psm1" {
    ($Resource.fullname + "\$Resource.psm1") | should exist
  }
}

Does it pass Test-xDscSchema and Test-xDscResource tests?
I may as well run these as part of my Pester tests. They already validate a lot of things that are easy to overlook.

it "Passes Test-xDscSchema *.schema.mof" {
  Test-xDscSchema ($Resource.fullname + "\$Resource.schema.mof") | should be true
it "Passes Test-xDscResource" {
  Test-xDscResource $Resource.fullname | should be true

If it is a composite resource, does it have the required files?
A composite resource uses different files than a standard resource. It has a *.psd1 and a *.schema.psm1 that should exist. I don't have any Test-xDsc functions for composite resources, so I add a few extra checks. I verify that the *.psd1 file references the *.schema.psm1 and that the module does not throw any errors when dot-sourcing it.

  it "Has a $Resource.schema.psm1" {
    ($Resource.fullname + "\$Resource.schema.psm1") | should exist
  it "Has a $Resource.psd1" {
    ($Resource.fullname + "\$Resource.psd1") | should exist
  it "Has a psd1 that loads the schema.psm1" {
    ($Resource.fullname + "\$Resource.psd1") | should contain "$Resource.schema.psm1"
  it "dot-sourcing should not throw an error" {
    $path = ($Resource.fullname + "\$Resource.schema.psm1")
    { Invoke-expression (Get-Content $path -raw) } | should not throw

I hope you find these examples useful. If you want to see more, take a look at part 1.

Tuesday, November 25, 2014

Setting HKEY_CURRENT_USER with a DSC resource

I built a fun new resource for managing registry settings. "DSC already has a resource for managing the registry," you say? This one sets user registry values for all users.

    KevMar_UserRegistry DisableScreenSaver
    {
        ID        = "DisableScreenSaver"
        Key       = "HKEY_CURRENT_USER\Control Panel\Desktop"
        ValueName = "ScreenSaveActive"
        ValueData = "0"
    }

How cool is that? The built-in DSC Registry resource can only manage system settings. For servers, this is all you really need. But if you have to manage user settings for some reason, forget about it. You need to use my resource to do it.

There are several limitations with my implementation to understand before we dive into how it works.

First, this setting applies to all existing users and every new user once it is set. So if you simply remove the setting from future configurations instead of using the Ensure = "Absent" option, new users on the system will continue to get it. The good news is that using Ensure = "Absent" does stop it from applying to new users.

Second, this sets the value only once per user, which kind of breaks the idea of DSC correcting configuration drift. If the setting needs to get reapplied, there is a version attribute that must be used and incremented. Each user keeps track of what version of the setting they have applied; increasing the version signals that something has changed and needs to be set again. This is important if you are changing the ValueData to something different.
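
As a sketch of that workflow, and assuming the resource exposes the version as a Version property, bumping it after changing ValueData would look something like this:

    KevMar_UserRegistry DisableScreenSaver
    {
        ID        = "DisableScreenSaver"
        Key       = "HKEY_CURRENT_USER\Control Panel\Desktop"
        ValueName = "ScreenSaveActive"
        ValueData = "1"   # changed value
        Version   = "2"   # incremented so every user reapplies the setting
    }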

Third, these registry settings are only applied at user logon. I am using a method that hooks into the user logon process to apply them. I do not flag a reboot to DSC. I considered it, but if you are starting to manage user settings, there can be a huge number of these in your configurations, and requiring a reboot for each one feels like a bit much. In my use case, I did not want the reboot. This is also why marking it as Absent can stop it from applying to any more users.

I'll do a write-up about how I did this in a future post. I used a Windows feature that is not very well known to most systems admins. I have the resource posted online if you want to check it out.

Monday, November 24, 2014

Use Show-Command for a quick PowerShell GUI

We all love PowerShell and know how awesome it is. But not everyone we work with is as willing to drop to the shell as we are. The good news is that there is a very easy way to give them the GUI they think they need: write your advanced function like you already do and have them run it with Show-Command.

Show-Command Set-Service
This beautiful little command pops up and asks them for the information needed to run the command. Give it a try with Show-Command Set-Service

Not only are the required values marked, you even get a drop-down box for some options. This works if you target *.ps1 files too.
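
The drop-downs come from things like ValidateSet. Here is a rough example of a script you could hand off; the Set-Widget.ps1 name and parameters are made up purely for illustration:

# Set-Widget.ps1
[CmdletBinding()]
param(
    [Parameter(Mandatory)]
    [string]$Name,

    [ValidateSet("Small","Medium","Large")]
    [string]$Size = "Medium"
)

Write-Output "Setting widget '$Name' to size '$Size'"

Point Show-Command at the file and Name shows up as required while Size becomes a drop-down:

Show-Command .\Set-Widget.ps1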

As cool as this is, Show-Command gets better. Run it on its own and it gives you a list of all commands on the system. The filter makes it very easy to find what you are looking for. 

Thursday, November 20, 2014

$Using: scope in DSC script resources

If you have spent any time working with DSC resources, you have found yourself needing the Script resource for some things.

Script Script_Resource
{
    GetScript  = {}
    SetScript  = {}
    TestScript = {}
    DependsOn  = ""
    Credential = ""
}

It is easy enough to use. Define your Get, Set, and Test PowerShell commands and you are all set. Everything is good until you need to pass variables into your script blocks. You will quickly find that it does not work like you would expect.

This command will always return false:

$path = "c:\windows\temp"
Script Script_Resource
    TestScript = {
        Test-Path $path

Because $path is not defined with a value within the scope of the TestScript, the Test-Path will return false. Take a look at the mof file and you can see why.

instance of MSFT_ScriptResource as $MSFT_ScriptResource1ref
{
 ResourceID = "[Script]Script_Resource";
 TestScript = "\n Test-Path '$path'\n ";
 SourceInfo = "::7::9::Script";
 ModuleName = "PSDesiredStateConfiguration";
 ModuleVersion = "1.0";
};


I have found two ways to deal with this issue. If you think about it, the TestScript is just a string that gets run on the target node, and if you look at the resource definition, TestScript is defined as a string. So you can build that string yourself:

$path = "c:\windows\temp"
Script Script_Resource
    TestScript ="Test-Path '$path'"

This works really well when the command is a very simple one line script. Take a look at the mof file now.

instance of MSFT_ScriptResource as $MSFT_ScriptResource1ref
{
 ResourceID = "[Script]Script_Resource";
 TestScript = "Test-Path 'c:\\windows\\temp'";
 SourceInfo = "::6::9::Script";
 ModuleName = "PSDesiredStateConfiguration";
 ModuleVersion = "1.0";
};


This could end up very messy if the script block gets more complicated. What if you have variables that you want to define in the script and others you want to pull from the parent scope? You end up escaping things with that horrible backtick. But there is a better way.

This is where the $using: scope comes to the rescue. As far as I can tell, it is undocumented for use in Script resources, but using it in Invoke-Command script blocks allows you to reference variables in the parent scope, and it works for our Script resource too.

$path = "c:\windows\temp"
Script Script_Resource
    TestScript = {
        Test-Path $using:path

Now when we dive into the mof file, we can see how this magic works. The value of $path gets inserted into the script as a variable assignment at the top.

instance of MSFT_ScriptResource as $MSFT_ScriptResource1ref
{
 ResourceID = "[Script]Script_Resource";
 TestScript = "$path='c:\\windows\\temp'\n Test-Path $path\n ";
 SourceInfo = "::7::9::Script";
 ModuleName = "PSDesiredStateConfiguration";
 ModuleVersion = "1.0";
};

The $using: scope is something I often overlook, but this is a very handy place to use it.

One final note about my examples: I trimmed them down to minimize the code. If you want to recreate my tests, you will need to have the SetScript and GetScript properties defined for each Script resource.
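
For reference, a complete version of the $using example with all three script blocks filled in might look something like this (the configuration name and the SetScript logic are only illustrative):

Configuration UsingScopeDemo
{
    $path = "c:\windows\temp"

    Node localhost
    {
        Script Script_Resource
        {
            GetScript  = { @{ Result = (Test-Path $using:path) } }
            TestScript = { Test-Path $using:path }
            SetScript  = { New-Item -Path $using:path -ItemType Directory -Force }
        }
    }
}

# Generate the mof and inspect how $using:path was expanded
UsingScopeDemo -OutputPath .\UsingScopeDemo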