Friday, May 31, 2013

Quick Script: are automatic updates enabled?



# Check whether Windows automatic updates are enabled
$objAutoUpdate = New-Object -ComObject Microsoft.Update.AutoUpdate
$objSettings = $objAutoUpdate.Settings

# NotificationLevel 4 = updates are downloaded and installed automatically
if ($objSettings.NotificationLevel -eq 4) {
    Write-Host '[X] Windows AutoUpdate'
} else {
    Write-Host '[ ] Windows AutoUpdate'
}



Monday, May 27, 2013

How to right-click sign PowerShell and other scripts

I set up a handy script a while back that allows me to right-click a script to sign it. I already had the code signing cert worked out; I just needed an easy way to sign things. Once you have the base scripts in place, it's easy to sign .ps1, .vbs, .dll, .exe, and RDP files.

Here is my actual PowerShell script that does the heavy lifting:

# Grab the code signing cert from the current user's store by thumbprint
$cert = Get-ChildItem cert:\currentuser\my -CodeSigningCert | Where-Object { $_.Thumbprint -eq "DD46064E89886A185F19FCD64483E35A1898925E" }
# Sign the file passed as the first argument, with a timestamp so the signature outlives the cert
Set-AuthenticodeSignature $args[0] $cert -TimestampServer "http://timestamp.verisign.com/scripts/timstamp.dll"
# Pause so the result stays visible when launched from the context menu
Start-Sleep -s 1

I also have one for VBScript:

' Sign the file passed as the first argument using the named code signing cert
Set objSigner = WScript.CreateObject("Scripting.Signer")
objSigner.SignFile WScript.Arguments(0), "Kevin Marquette"

I use those with the following registry keys to enable the right-click options:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1\Shell\Sign\Command]
@="\"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" \"-file\" \"N:\\bin\\SignScript.ps1\" \"%1\""

[HKEY_CLASSES_ROOT\exefile\shell\Sign\command]
@="\"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" \"-file\" \"N:\\bin\\SignScript.ps1\" \"%1\""

[HKEY_CLASSES_ROOT\dllfile\shell\Sign\Command]
@="\"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" \"-file\" \"N:\\bin\\SignScript.ps1\" \"%1\""

[HKEY_CLASSES_ROOT\VBSFile\Shell\Sign\command]
@="\"c:\\windows\\System32\\CScript.exe\" N:\\bin\\SignScript.vbs \"%1\""

[HKEY_CLASSES_ROOT\RDP.File\shell\Sign\command]
@="rdpsign /sha1 DD46064E89886A185F19FCD64483E35A1898925E \"%1\""

These expect your code signing cert to be added to the local user's cert store within Windows. You can run this PowerShell command to make sure you have it in the right place:

gci cert:\currentuser\my -CodeSigningCert
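
If you want to confirm a file actually got signed, Get-AuthenticodeSignature will show you the result. For example, against the script from the registry keys above:

# Verify the signature on a freshly signed script
Get-AuthenticodeSignature N:\bin\SignScript.ps1 | Format-List Status, SignerCertificate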


Friday, May 24, 2013

Using AppLocker audit mode to track down malware

We all know that AppLocker can stop a lot of things we don't want running on the computer.  That includes malware. If you are not ready to pull the trigger, audit mode can still be a great asset.

Audit mode tells you about everything that runs on a system. It creates a log entry every time a program runs, and that entry tells you whether the app would have been allowed or blocked, and why. A log like that can give you a lot of information.

Once you start building rules, it gets even better. Then you can filter on the things that would have been blocked.  If you see something that is legit, then you can create a rule for it.

Things like malware just jump out at you in those logs. A quick script like this will show you where it's hiding.


# Event ID 8003 = the file ran, but would have been blocked if the policy were enforced
Get-WinEvent -LogName "Microsoft-Windows-AppLocker/EXE and DLL" |
    Where-Object { $_.Id -eq 8003 } |
    Format-Table Message
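
On a busy machine, it is faster to filter at the log itself than to pipe everything through Where-Object. A minimal sketch of the same query:

# Same query, filtered at the source for speed
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-AppLocker/EXE and DLL'; Id = 8003 } |
    Format-Table TimeCreated, Message -AutoSize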


Friday, May 17, 2013

When fast updates bite back

You already know that I am fast to apply updates and move to new products. I have almost all our WSUS updates set to auto approve and I load them on our servers and workstations the same day. I have a lot of faith and confidence in the patching process. But every once in a while, they bite back.

I can recall a few years ago when Microsoft got a lot of flak for blue screening computers with an update. I was keeping up to date with the situation from various news feeds. I unapproved the update while I investigated it more. We were not seeing any blue screens, but other admins were. I saw all kinds of email flying around warning people and reporting issues.

As real details started to pour in, it turned out that the only computers that were blue screening were the ones with rootkit infections. I immediately pushed that patch out to the rest of my computers. If my computers were infected, I wanted to know about it. In the end we had a clean bill of health, but not everyone was so lucky.

I had the Bing Desktop search bar get deployed once. It turns out I had feature packs on auto approve. I quickly fixed that and recalled the update.

Last year Microsoft released a patch that would no longer trust 512-bit certs. I was following the progress of this issue for a while. The Flame malware was using a 512-bit cert of Microsoft's that was weak enough to break. Microsoft revoked that cert and later released this patch to distrust all 512-bit certs. I took a quick peek at our central IT's cert server. While I did see a few of those 512-bit certs, I saw many more 1024- and 2048-bit ones. I figured we had nothing to worry about.

It turns out that our email was using one of the weaker certs, so every one of my users was getting an error message that Outlook did not trust our email server. I got on the phone with central IT and pushed them to get an updated cert rolled out. They recommended that the rest of the org not install that patch. It turned out that they needed to update a root cert first, and that is a delicate process when you don't do it very often. That was not something they were going to rush. Luckily, Microsoft had a KB article that talked about this issue and offered a command that would trust 512-bit certs again.

I was able to PowerShell that command out to everyone and life returned to normal. I reverted that setting once the certs were taken care of.
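
For reference, the workaround in that KB was a certutil tweak to the minimum accepted RSA key length. Pushing it out over remoting might have looked roughly like this sketch (the computer list file is a hypothetical placeholder):

# Temporarily re-trust 512-bit RSA certs, per the Microsoft KB workaround
# Revert later by setting the minimum back to 1024
Invoke-Command -ComputerName (Get-Content N:\bin\computers.txt) -ScriptBlock {
    certutil -setreg chain\minRSAPubKeyBitLength 512
}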

The one update that almost bit us the hardest was the PowerShell 3.0 and remote management update that was released around December 2012. We started to run into some strange issues with remote PowerShell and SCCM. Before we knew it, we realized that we could not remotely PowerShell into anything. SCCM was also down and out. I started a deep dive into the internals of WinRM to fix this. Listeners were broken and PowerShell was refusing to re-register the settings it needed for remote management.

Something reminded me that PowerShell 3.0 was out, and I found it on our workstations. We started finding reports of compatibility issues between SCCM 2012 and PowerShell 3.0. Configuration Manager was attempting to repair WMI but would corrupt it instead. We ended up pulling that patch using WSUS, and everything returned to normal in a few days. The SCCM server took a little more work to correct.

Not having PowerShell when you need it can be very scary. It is my go-to tool for recovering from most issues. It is so handy for forcing a WSUS check-in, a gpupdate, or an ipconfig /flushdns to resolve some issue.
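
Those one-liners are exactly the kind of thing I mean. A rough sketch of how they get pushed over remoting (SERVER01 is a placeholder name):

# Typical quick fixes pushed over PowerShell remoting
Invoke-Command -ComputerName SERVER01 -ScriptBlock {
    wuauclt /detectnow    # force a WSUS check-in
    gpupdate /force       # refresh Group Policy
    ipconfig /flushdns    # clear the DNS resolver cache
}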

I think patching fast works well in our environment because we have a good team that is flexible and quick to respond to these types of issues. We still get caught off guard from time to time, but we handle it well. 

Monday, May 13, 2013

Lightning Fast Updates

How fast do you deploy updates? If Microsoft released a run of the mill update today, how soon would you see it on your production systems?

I like to patch my systems quickly. Over time, I have gotten quicker and quicker at rolling them out. WSUS was a great addition to Windows Server. Not only do I auto approve the important updates, I also auto approve just about everything else. I found myself blindly approving them twice a year anyway, and that tends to create a monster patch. The problem with monster patches is that people notice them. The login takes longer, so you tell them it's the patches. Then if something like a hard drive goes out, people blame the patches.

I still hold off on service packs, new versions of IE, and feature packs, all for good reason; I have only been caught off guard a few times because of it. Everything else is on auto approve.

One issue that I did have for a while was patches showing up later in the week than I expected. I would be ready for Patch Tuesday, expecting that things would be patched Wednesday morning, and they were not. It felt like most of our patches hit on Thursday instead. So I started to look into it.

Our machines patch at 3:00 am if they are powered on. I tried to get WSUS to sync just before 3:00 am so things would be ready to go when the computers went to update. It sounded good in theory, but that is not how the client updates. I found out that the computer only checks in with WSUS about once a day by default (every 22 hours, minus a random offset). If WSUS was not pulling updates until 3:00 am, then everything was really updating a day behind.
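
If you need clients to check in more often than that, there is an "Automatic Updates detection frequency" Group Policy for it. A hedged sketch of the equivalent registry values the policy sets (hours between checks):

# Policy-equivalent registry values for the client detection frequency
$au = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU'
New-Item -Path $au -Force | Out-Null
Set-ItemProperty -Path $au -Name DetectionFrequencyEnabled -Value 1
Set-ItemProperty -Path $au -Name DetectionFrequency -Value 8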

So now I knew why my updates felt a whole day behind. I increased how many times WSUS would pull updates from Microsoft to three and let it run for a long time. My WSUS server was checking for updates at 11:00 am, 7:00 pm, and 3:00 am Central. This way I was catching any other updates that showed up at odd times. I would get a few more machines and servers updated a day ahead, but the bulk of them were still a day behind.

There is a very subtle detail here that I overlooked for the longest time: I had no idea what time Microsoft actually released updates on Tuesdays. I ran with this schedule for a very long time. Then one day I was reading up on an important IE patch that everyone was rushing to load, and I saw the expected release time. It was 10:00 am PST. Seeing this time reminded me to check my update schedule.

Sure enough, I had it in my mind that they were released in the evening. I could see my logs showing updates getting pulled at 7:00 pm every Patch Tuesday. With 10:00 am PST being 12:00 pm Central, it clicked where my issue was. I moved that early sync to 1:00 pm and everything started updating right on schedule. All my servers and workstations were updating as expected on a very clear schedule.
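
On newer servers, that sync schedule can also be set from PowerShell with the UpdateServices module. A sketch, assuming the module is available (note that the time of day is stored in UTC):

# Sync three times a day, starting at 18:00 UTC (1:00 pm Central during DST)
$subscription = (Get-WsusServer).GetSubscription()
$subscription.SynchronizeAutomatically = $true
$subscription.SynchronizeAutomaticallyTimeOfDay = New-TimeSpan -Hours 18
$subscription.NumberOfSynchronizationsPerDay = 3
$subscription.Save()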

I also started scheduling Wake-on-LAN to power up our workstations and combined it with a check-for-updates event. So now all my computers are getting updated as fast as reasonably possible, and I know exactly when to expect issues.
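
Wake-on-LAN itself is just a UDP broadcast of a magic packet: six bytes of 0xFF followed by the target MAC address repeated sixteen times. A minimal sketch (the MAC here is a placeholder):

# Build and broadcast a Wake-on-LAN magic packet
$mac = '00-11-22-33-44-55' -split '-' | ForEach-Object { [byte]('0x' + $_) }
$packet = [byte[]](,0xFF * 6) + ($mac * 16)
$udp = New-Object System.Net.Sockets.UdpClient
$udp.EnableBroadcast = $true
$udp.Connect([System.Net.IPAddress]::Broadcast, 9)
[void]$udp.Send($packet, $packet.Length)
$udp.Close()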

Tuesday, April 23, 2013

Let's build a Data Warehouse

Our reporting needs have outgrown our existing tools. Actually, that's not true. We have all the right tools, but we are not using them as well as we could be. It all starts with our data. Right now it all sits in our vendor's schema. That works well for the transactional nature of the application, but not so much for reporting.

We have done a lot with what we have. Every night, we take the most recent database backup and load it onto a second server that is used for reporting. I take about a dozen of our core queries and dump them to tables for use the next day. We do the basics like indexes and primary keys. Our issue is that these are designed for specific reports. As the demands and needs of the reports change, we put in a good deal of time reworking the queries.

We started building our reports with Reporting Services and have not expanded our use of the tools that SQL Server has to offer yet. In the meantime, I have gotten more involved in the SQL community, attending user groups, SQL Saturdays, and other Microsoft tech events. I have been introduced to a lot of features and ideas that I was previously unaware of. I think it's time we built a data warehouse.

I don't think our dataset is large enough for me to truly call what I am going to make a data warehouse. My database sits at 30-some GB in size. I also have a huge maintenance window. The core activity of our business ends by 5:00 pm, so I have all night to process whatever I want. That means my ETL process can rebuild my entire dataset every run, in the beginning anyway. I'll deal with slowly changing dimensions later.

I want to build a star schema for my data and take advantage of Analysis Services. I want to be able to expose my data to PowerPivot and Power View. I see a lot of power in these tools, and there is no better way to learn than to jump into it. Even if I can't get my user base to use these tools, it will help me parse our data and they will still benefit.

Friday, April 19, 2013

AppLocker Audit Mode Three Months Later

I enabled AppLocker in audit mode about 3 months ago for all of our workstations. I spent about 2 weeks checking the logs and adding rules. I put it on the back burner to take care of some other things and almost forgot about it.  I ran those scripts I posted previously to check up on my workstations and things look fairly clean. Here are a few things that stand out to me.

There are a handful of things that run out of the user's profile and ProgramData that I need to be aware of. I see a Citrix and a WebEx client pop up on a few machines. Spotify also jumps out in the list; I didn't realize how many of our users used that. I also see a few Java updates being run from the temporary internet files folder. Nothing too crazy here that would have impacted much. I expect it would have been a handful of panic calls from people who could not get some web conferences to work.

I did find a custom app that we wrote sitting on some desktops that would have broken, and that would have been a big deal. I think I will just sign those apps and place them in the Program Files folder. I can use these logs to track down those users. This app is just an exe, so there is no installer or registry footprint to look for.
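
Once those apps are signed, generating a publisher rule for them is straightforward with the AppLocker cmdlets. A sketch using a hypothetical path:

# Build a publisher rule from the signed exe (path is hypothetical)
Get-AppLockerFileInformation -Path 'C:\Program Files\OurApp\OurApp.exe' |
    New-AppLockerPolicy -RuleType Publisher -User Everyone -Xml |
    Out-File .\OurAppRule.xml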

The last group of findings was just a handful of special machines that had something installed to a folder on the root of the C: drive. I could guess exactly where these machines were based on the names of those folders. I will handle these case by case. I am tempted to just give them local exceptions instead of baking something into the main policy.

Now that we are aware of these things, we can do things right going forward. Getting everything installed into Program Files would help the most. I plan on letting this run for another several months to see what else I pick up.