Wednesday, November 28, 2012

Idea of annual Windows releases is very interesting

The idea of annual Windows releases is very interesting.  The corporate customer of today gets stuck on a release and never changes.

In the past there have been issues, but the move from Vista to Windows 7 to Windows 8 has been fairly painless.  Of course, almost nobody used Vista, so in practice it was the move from Windows 7 to Windows 8 that was painless as far as application compatibility was concerned.  I have more issues with IE10 than with Windows 8.

I already know what I don't want.

I don't want a separate corporate version and a separate consumer version.  We had that back in 2000.  We ended up with home users using the corporate version and corporate users mixing in the home version.  It confuses everyone when, at the end of the day, it is still Windows.  I don't want to be stuck waiting for features to show up in the corporate version.

I also don't want to be supporting 10 different installations of Windows.  Now that I am installing x86 and x64 machines that I expect to be in service for about 5 years, I could easily end up with way too many versions to keep track of.  That just kills automation.

I also don't want any more drastic or confusing changes that users will not understand.  If you are going to release yearly, I do not want to retrain my entire user population on Windows basics every year.  Not deploying Windows 8 because a single feature requires user training is annoying.  I would have rolled out Windows 8 to 10% of my workstations already if it wasn't for the start menu.

Here is how I see the landscape.

Upgrades would need to be as smooth as a Service Pack.  I am an advocate of fresh installs every time, and have been for a long time.  Windows 8 is the first release where I feel comfortable doing the upgrade and trusting the results.  So they are already on the right track.

But I am still stuck in yesterday's environments with all of these thoughts.  I see companies clinging to XP for no good reason, robbing themselves of all the advances that Windows 7 brought us.  But that decade is over.  Looking forward, the landscape is very different.

The transition to VDI is happening very fast.  This presents us with something unique, especially when yearly updates come into play.  Depending on your setup, an OS refresh could be almost instant.  Users could leave one day and, when they return the next day, be running Windows Next.

Microsoft's Hyper-V

If you look at how fast Microsoft is changing Hyper-V, we want the OS to change just as fast.  Microsoft nailed it for virtualizing servers.  They want to move into the VDI game, and a yearly OS refresh fits into that very well.  I can't wait.  The more I think about it, the more I expect it will be the corporate VDI customer that adopts Windows Next quickly.

Tick Tock Windows Blue

I just saw an article talking about Microsoft releasing a new OS every year.  I think it is a great idea.  But I already hear the rumble of separating corporate customers from the pack.  This is what we had in the beginning: Windows 95, 98, and ME for the home user and NT and 2000 for the business user.  What we ended up with was business users on Windows 98 and home users on 2000.

I do not want to go back to that.  I think a Tick Tock release schedule would be much better.  The idea is that the Tick releases have major functionality changes and the Tock releases are where they get refined.  Use the Tock release to appeal to the corporate customer.

Monday, November 05, 2012

SQL backups revisited. Just use Ola Hallengren's Scripts

I made a quick post about SQL backups not that long ago.  Take it for what it's worth, but there is a much better way to deal with backups.  Ola Hallengren has a set of maintenance scripts that could not be easier to use.  I can't tell you how much time I have spent tweaking and adjusting my own scripts in the past.  I knew of his scripts but never took the time to look at them.

All you do is run the script and then add a schedule to the jobs it creates.  The jobs are very clear in what they do.  If you review his site, he even gives a suggested schedule and job order that will fit most people.  Those scripts handle many special cases: they know whether your database needs log backups or not, and they even take Always On backup priorities into account.
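To give a rough idea of what the jobs run under the hood, here is a sketch of a call to his DatabaseBackup procedure.  The parameter names come from his documentation; the directory and retention values are placeholders you would change for your environment.

```sql
-- Sketch only: directory and retention values are placeholders
EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = '\\server\share',
    @BackupType = 'FULL',
    @Verify = 'Y',
    @CleanupTime = 168  -- remove backup files older than 168 hours (7 days)
```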

I don't know why I never looked into them before, but I will use them on every database I administer from now on.

Sunday, October 28, 2012

My time is way over budget

I was reviewing my list of projects and I realized that it has gotten way too long.  I have too much time debt.  It may sound strange to call it that, but I look at it the way I look at finances.  If you are collecting too much debt, you need to analyze how you are collecting it.  With my checking account, I can easily pull a list of all my transactions.  It's a little harder when we talk about time.  There is no record being generated automatically.

In order to analyze my time, I need to start tracking it.  I read about several ways to do it and settled on something fairly simple.  I opened up Excel and made a table with these headings: Day, Time, Minutes, Description, and Category.  Every time I change tasks, I write down the time and a short 2-3 word description.  I try to write down the number of minutes I spent at the same time, but I don't care if I miss a few.  It is easy enough to calculate after the fact.

I am using very broad categories. I want a high level view of where my time is spent. I think I have about 7-9 things that I am tracking but they roll up into 3 large buckets.  Support, Other, and My Projects. At the end of the day, I will make sure to date all the entries to assist in later analysis.
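Since the table lives in a spreadsheet, the roll-up is easy to automate.  As a sketch, assuming the worksheet is exported to CSV with the headings described above (the file name here is made up):

```powershell
# Sum tracked minutes per category from an exported copy of the log.
# 'time-log.csv' and the column names are assumptions based on the
# table layout described above: Day, Time, Minutes, Description, Category.
Import-Csv time-log.csv |
    Group-Object Category |
    ForEach-Object {
        [PSCustomObject]@{
            Category = $_.Name
            Minutes  = ($_.Group | Measure-Object Minutes -Sum).Sum
        }
    } |
    Sort-Object Minutes -Descending
```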

The whole point of me collecting this data is to analyze it.  The results have been interesting so far.  A strong third of my day is end user support.  This is a measure of overflow from the help desk; ideally we would have enough support staff to handle support issues.  The next third of my day is meetings, reviewing items with the rest of the team, and email.  The last third of my day is me working on my projects.

These results are interesting because I am not getting as much work done as I thought I was.  I am busy all the time, but not enough of it is going toward my projects.  I initially estimated my project list at 56 weeks.  With these new metrics, it's more like 3 years' worth of stuff.

I am going to keep tracking my time to see if I am able to improve these numbers.

Sunday, October 21, 2012

Windows 8 and Juniper RDP VPN instability

I have been dealing with an interesting issue with Windows 8.  I loaded the beta way back when it was first released.  One issue I ran into was my VPN connectivity at work.  It was very unstable.  I could get 2-5 minutes of work done at a time before it would drop.  I wrote it off as a beta issue and went on my way.  I didn't need to work from home as much as I had been, so it was not that big of a deal.

I was a little disappointed when I installed the RTM and the issue continued.  I could deal with it if I was just checking in on servers, but if I needed to do any real work, it was just too much.  It felt like it was dropping more and more often.

This weekend I actually needed to work on some things and my connection would only last a few seconds.  So it was time to solve the issue. I had enough.  I didn’t have any quick access to any computers that were not running Windows 8 or Server 2012.  I thought it was a good time to finally enable Hyper-V on my desktop.

I enabled the feature and, after the reboot, started installing Windows 7.  As I waited for the install to run, I was reminded of how much faster Windows 8 installs.

The good news is that it worked.  I was able to connect to my VM to use my VPN.  I did find it interesting though.  I would RDP into my VM, to RDP into my work desktop, to RDP into my servers.

If anyone else is having the same issues I am, here is one solution.  I think the issues I am having come down to the way our VPN is deployed.  We use a Juniper client that has its own RDP client.  I think if our admins had configured things a little differently, I could use a different RDP client.  But this works well enough.

Saturday, October 13, 2012

100 projects and counting

I sat down and listed out all my tasks and projects in a spreadsheet. I wrote down everything that came to me.  All the things that people expect me to do or would like me to do.  I put down things that I should be doing but never get to.  I listed all the things I know I will never do but should be on the list anyway.

I needed to clear all of it out of my head. Get it down someplace so I am not wearing myself out thinking about it.  I just kept going as far as I could go.  In the end, I had over 100 items listed in my spreadsheet. That kind of caught me off guard when I saw that number.  I do this every so often and it usually helps me recharge a bit. But this time it showed me how far behind I really am.

I took a bit of time to put time estimates with each item to get a better picture.  The running total was just over a year's worth of work.  Assuming that nothing else came up, I could be caught up in 56 weeks.

I decided to take a look back at the last few times I recorded all my projects.  My lists from 6 months ago and 12 months ago were the only ones with time estimates on every item.  When I chart the time estimates for all 3 periods (today, 6 months ago, and 12 months ago), it shows that my list is getting longer.  It's growing much faster than I can clear items off of it.

There is no way I can take care of that list alone.  It's very apparent that I either need a team of my own to tackle these things or I need to start turning people down.  But now I have some data to back me up when I bring it up.

Wednesday, October 03, 2012

I'm sorry Windows 8

When the Windows 8 Consumer Preview was released I was waiting for the download link to become active.  I have spent a good deal of time getting to know Windows over the years.  This is just another beta in a long list of Windows betas that I have run as my primary operating system.  I have to be honest, I struggled with the new UI.

I am one to figure things out on my own.  That’s exactly why I run beta software hot off the press.  But here I am practically running an IT department at times and I could not find the shutdown button.  Using the mouse felt awkward because I can’t use it like my finger.  Do I seriously have to use the scrollbar?  Why not click and slide?  Once I did find the shutdown button, I could not find the log off button.  It also felt awkward to pull out the charms bar to search the start menu.

I found myself using Powershell to shut down or reboot the computer because I knew the command. It felt silly that I eventually had to google for these simple things.  I eventually made my desktop shortcuts and set up my pinned apps.  Then something amazing happened.  The fact that I was running Windows 8 faded into the background.  Once I stopped using the start screen, I found myself working the exact same way I worked in Windows 7.
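For anyone in the same spot, the built-in cmdlets that cover this are short enough to remember:

```powershell
Stop-Computer      # shut down the local machine
Restart-Computer   # reboot the local machine
```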

The Building Windows 8 blog did a very nice write-up about the goals and ideas that inspired the new design.  It was a wonderful read that gave me a lot of insight.  I deleted all but 8 items from the start screen and was content to use it as needed.

When I got my hands on the RTM, I decided to give it another shot.  I took everything I knew and ran with it.  Things felt good at first.  I was checking email, doing social media, and browsing with IE 10.  I found the Metro IE 10 to be an interesting experience.  This worked for a while.

Once I stopped playing with things and started using my system, I kept falling back to the desktop browser.  I tried to stick with IE10 as much as I could.  It is very hard to resist using Chrome though.   So I am basically using 3 browsers.  This is making my experience very fragmented.  I flip into Metro for Twitter, I flip into Metro for Facebook, and I flip into Metro for email.   But I’m getting tired of flipping. I’m done flipping.

I’m sorry Windows 8. I wanted to see the start screen succeed, but I can’t force it.  I may not install a start menu replacement, but Metro is not going to be my main workspace anymore.  I’m going back to the desktop and I’m going back to one browser.

Wednesday, August 15, 2012

Discovering SQL Server: TempDB

TempDB is a very unique database.  It is critically important but wiped out every time you restart SQL Server.  SQL Server does a lot of important work in TempDB, but it's all temporary.  It is a scratch file, if you will.

TempDB gets its own disks; the faster the better for both reads and writes.  This file can get a lot of activity, and the disk contention it creates will be noticeable.  Some query operations can spill out of RAM into TempDB, and it's possible to sort indexes in there as well.

Use more than one TempDB data file if you have lots of cores in your server.  Lots of people have different ideas on how many files you should use for TempDB.  One rule of thumb is one file per 4 cores.  A few files are OK, but don't go overboard with it.  The important detail is to manually size the files so they are all the same size.  SQL will use them more evenly when you do that.
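As a sketch of what that looks like, here the existing data file is resized and one more file is added.  The logical names, path, and sizes are placeholders; the point is that every file gets the same size.

```sql
-- Sketch only: names, path, and sizes are placeholders; keep all files equal
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf',
              SIZE = 1024MB, FILEGROWTH = 256MB);
```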

Another good tip I picked up from one of my local SQL user groups is to make TempDB the default database instead of master for all users that don't have a more appropriate default.  The idea is that if you forget to change to the right database in Management Studio, your scripts will run in TempDB instead of master.  So if you create a bunch of tables in TempDB, it's no big deal because they will clean themselves up.
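Changing the default is a one-liner per login.  The login name here is made up:

```sql
-- Hypothetical login; repeat for each login that should land in TempDB
ALTER LOGIN [CONTOSO\jdoe] WITH DEFAULT_DATABASE = tempdb;
```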

Monday, August 13, 2012

Discovering SQL Server: Backups

It is very important to pay special attention to SQL backups.  SQL is not your average server, and a little extra care is in order to make sure you are doing it correctly.  The database files have constant activity, so you can’t just ask Windows to make a copy.
Here is a quick SQL command to get you started:

                BACKUP DATABASE MyDatabase
                TO DISK = '\\server\share\MyDatabase.bak'
                WITH BUFFERCOUNT=35

                BACKUP LOG MyDatabase
                TO DISK = '\\server\share\MyDatabase.trn'
                WITH BUFFERCOUNT=35

This takes a fresh full backup of your database and a tail backup of your log file.  Make sure you are backing up your logs.  If this is a production database, you should back up the log frequently.  I used a network path in my example because I want those backups off the server.

Now that you have your backups in a file on another server, use your favorite backup method to back them up.  Every environment is different, but I do full backups nightly and keep the 7 most recent backups on the network share.  My transaction log backups run every 15 minutes on databases that need to be backed up more often than daily.

There are several options you can use when running your backups.  I also add COMPRESSION and CHECKSUM along with the BUFFERCOUNT=35 option.  The buffer count is kind of a magic number that speeds up your backups: it allows the backup process to stream more data from disk into RAM as you save it to the network.
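Putting all three options together, the full backup from earlier would look like this:

```sql
BACKUP DATABASE MyDatabase
    TO DISK = '\\server\share\MyDatabase.bak'
    WITH COMPRESSION, CHECKSUM, BUFFERCOUNT = 35
```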

Thursday, August 09, 2012

iSCSI SendTargets issues with MD3620i and VMM

I have a small Hyper-V cluster with 3 Dell R610s and an MD3620i storage array using 10Gb iSCSI.  The event log on the MD3600 unit generates an informational event every 30 minutes.  This makes the log very hard to read and slow to load.

Here is the full event:

Event type: 180C
Description: iSCSI connection terminated unexpectedly
Event specific codes: 0/0/0
Event category: Internal
Component type: iSCSI Initiator
Component location: 
Logged by: RAID Controller Module in slot 0

As I was tracking this down, I started investigating the event log on my host servers.  I found several MSiSCSI 113 warnings every half hour:  

Log Name:      System
Source:        MSiSCSI
Event ID:      113
Task Category: None
iSCSI discovery via SendTargets failed with error code 0x00001068 to target portal * 0003260 Root\ISCSIPRT\0000_0 .

The warning would repeat for every target portal IP address of the MD3600.

After a lot of digging on the internet, I discovered this forum post: SCVMM 2008 R2 - Host Refresh causes Event ID 113 MSiSCSI events on Hyper-V Cluster.  VMM runs a refresh on the cluster every 30 minutes, and that refresh generates those errors.  It looks like the iSCSI paths are checked and target discovery is run.  The solution at the bottom of the thread by bellacotim resolved this issue for me.

The problem is that, even though not all iSCSI HBA instances can actually reach the target in question, the user had set up the Discovery Portal to issue iSCSI "Send Targets" along all possible iSCSI HBAs + the MSFT SW initiator.  This is the default behavior if all one does is specify the specific initiator.
To properly configure discovery, do the following (assumes a fresh environment):
  1. Open the iSCSI Initiator GUI
  2. Select the Discovery Tab
  3. Click "Discovery Portal..." button to open the Discovery Target Portal dialog
  4. Enter the IP address (optionally TCP Port number) of the target's iSCSI portal
  5. Click "Advanced..." button to open the Advanced Settings dialog
  6. On the "Local Adapter:" pulldown, select a specific HBA instance you *know* can actually connect to the target.  Hint:  By inspecting the list of IPs for this HBA instance (see 7 below), one can gain this knowledge
  7. On the "Initiator IP:" pulldown, select the local address from which this HBA should connect
  8. Click OK to close the Advanced Settings dialog
  9. Click OK to save your changes
  10. Repeat from (3) for all Initiator - Target combinations
I would perform these steps during a maintenance window. It gives a warning about disconnecting active sessions when removing existing discovery targets. 
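On Windows 8 and Server 2012, the same initiator-to-target pairing can be scripted with the built-in iSCSI cmdlets instead of clicking through the dialog for every combination.  This is only a sketch; the addresses and initiator instance name are placeholders for your own values.

```powershell
# Register a discovery portal bound to one specific initiator path.
# Repeat for each known-good initiator/target combination.
# All addresses and the instance name below are placeholders.
New-IscsiTargetPortal -TargetPortalAddress "192.168.130.101" `
    -InitiatorPortalAddress "192.168.130.11" `
    -InitiatorInstanceName "ROOT\ISCSIPRT\0000_0"
```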

The good news, from what I can tell, is that it's only filling up the event logs with clutter; it is not causing any performance issues.  It also looks like this shows up with the other MD-series iSCSI units (MD3000i, MD3200i, MD3220i, MD3600i, MD3620i).

Monday, April 23, 2012

PowerShell: Compare system configuration vs baseline

PowerShell makes it very easy to take custom baselines and compare configurations to that baseline.  One of the easiest examples is checking for changes to services.  PowerShell has simple cmdlets for listing services, saving the results to a file, and comparing for differences.  Here is a quick snip of code that highlights what we are doing.

$baseline = get-service
stop-service spooler -force
$current = get-service
Compare-Object $baseline $current -Property Name, Status

This will capture a baseline, stop your print spooler, then capture a current list.  Then we compare a few properties.  It will highlight that in one list the service is running and in the other it is not.  If a new service was added or removed, that would also be indicated.  We can use this simple concept to build a configuration change tracking system.

I want to expand this a little bit into a script that I can run every week to show me the changes on my systems.  Sounds easy enough, so let's see what we come up with.

function Compare-Baseline($folder, $id, $command, $properties){

    $root = "$folder"
    $report = "$root\$id-report.txt"

    # Prep folders
    if((Test-Path $root) -eq $false){
        md $root | Out-Null
    }

    # Prep baseline
    if((Test-Path "$root\$id-base.xml") -eq $false){
        & $command | Select-Object -Property $properties | Export-Clixml -Path "$root\$id-base.xml"
        "New $id baseline created" | Out-File $report -Append
    }

    & $command | Select-Object -Property $properties | Export-Clixml -Path "$root\$id-current.xml"

    $base = Import-Clixml "$root\$id-base.xml"
    $current = Import-Clixml "$root\$id-current.xml"
    $compare = Compare-Object $base $current -Property $properties -SyncWindow 100

    $compare | Out-File $report -Append
    Remove-Item "$root\$id-base.xml"
    Move-Item "$root\$id-current.xml" "$root\$id-base.xml"

    type $report
}

Compare-Baseline "c:\scratch" "Service" {Get-WmiObject win32_service} ("Name", "StartMode", "State", "PathName")

Now I can run this any time I want to see when the services on this box change.  I decided to use WMI's win32_service class because it gives me a few more details, but it's the same idea.  I wrote this in a very general way so it would be possible to run it on many machines.

I have several ideas for this going forward.  I could easily schedule this and have the results emailed to me.  I may also collect those baselines in a central location.  Taking it a step further, I could have one task that checks AD for servers and then runs this once on each server.  That would allow it to discover new servers and provide me a single report.
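A sketch of that last idea, assuming the RSAT ActiveDirectory module is available and Compare-Baseline is already loaded on each server.  The filter, paths, and mail settings are all placeholders.

```powershell
# Sketch: discover servers from AD, run the baseline check on each,
# and mail the combined report.  Names and addresses are placeholders,
# and Compare-Baseline must already be defined on the target machines.
Import-Module ActiveDirectory

$servers = Get-ADComputer -Filter 'OperatingSystem -like "*Server*"' |
    Select-Object -ExpandProperty Name

$report = Invoke-Command -ComputerName $servers -ScriptBlock {
    Compare-Baseline "c:\scratch" "Service" {Get-WmiObject win32_service} ("Name", "StartMode", "State", "PathName")
}

Send-MailMessage -To "me@example.com" -From "baseline@example.com" `
    -SmtpServer "smtp.example.com" -Subject "Weekly baseline report" `
    -Body ($report | Out-String)
```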