Our reporting needs have outgrown our existing tools. Actually, that's not true. We have all the right tools but are not using them as well as we could be. It all starts with our data. Right now it all sits in our vendor's schema. That works well for the transactional nature of the application, but not so much for reporting.
We have done a lot with what we have. Every night, we take the most recent database backup and load it onto a second server that is used for reporting. I take about a dozen of our core queries and dump them to tables for use the next day. We do the basics like indexes and primary keys. Our issue is that these are designed for specific reports. As the demands and needs of the reports change, we put in a good deal of time reworking the queries.
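The nightly dump itself is nothing fancy. It looks roughly like this, where the server, database, table, and query names are all placeholders I made up for the example:

# Rebuild each reporting table from its core query on the reporting server.
# Server, database, and object names below are placeholders.
Import-Module SQLPS -DisableNameChecking

$reportQueries = @{
    'rpt.DailyOrders'  = 'SELECT OrderID, OrderDate, CustomerID, Total FROM dbo.Orders'
    'rpt.OpenInvoices' = 'SELECT InvoiceID, CustomerID, Balance FROM dbo.Invoices WHERE Balance > 0'
}

foreach ($table in $reportQueries.Keys) {
    Invoke-Sqlcmd -ServerInstance 'REPORTSQL' -Database 'VendorCopy' -Query "
        IF OBJECT_ID('$table') IS NOT NULL DROP TABLE $table;
        SELECT * INTO $table FROM ( $($reportQueries[$table]) ) AS src;
    "
}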
We started building our reports with Reporting Services and have not expanded our use of the other tools that SQL Server has to offer yet. In the meantime, I have gotten more involved in the SQL community, attending user groups, SQL Saturdays, and other Microsoft tech events. I have been introduced to a lot of features and ideas that I was previously unaware of. I think it's time we built a data warehouse.
I don't think our dataset is large enough for me to truly call what I am going to make a data warehouse. My database sits at 30-some gigabytes in size. I also have a huge maintenance window. The core activity of our business ends by 5:00 PM, so I have all night to process whatever I want. That means my ETL process can reprocess the entire dataset every run, in the beginning anyway. I'll deal with slowly changing dimensions later.
I want to build a star schema for my data and take advantage of Analysis Services. I want to be able to expose my data to PowerPivot and Power View. I see a lot of power in these tools, and there is no better way to learn than to jump into it. Even if I can't get my user base to use these tools, it will help me parse our data and they will still benefit.
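To make that concrete, the first pass at the nightly full reload will probably look something like this. Every name here (server, database, tables, columns) is invented for the sketch; the real model will come out of our actual data.

# Nightly full reload of a tiny star schema (all names are placeholders).
Import-Module SQLPS -DisableNameChecking

Invoke-Sqlcmd -ServerInstance 'REPORTSQL' -Database 'Warehouse' -Query "
    -- Start clean every night: fact table first, then dimensions
    TRUNCATE TABLE dw.FactSales;
    TRUNCATE TABLE dw.DimCustomer;

    -- Dimensions straight from the vendor tables
    INSERT INTO dw.DimCustomer (CustomerKey, CustomerName, Region)
    SELECT CustomerID, Name, Region
    FROM   VendorCopy.dbo.Customer;

    -- Fact rows keyed to the dimensions, with a YYYYMMDD date key
    INSERT INTO dw.FactSales (DateKey, CustomerKey, Amount)
    SELECT CONVERT(int, CONVERT(char(8), OrderDate, 112)), CustomerID, Total
    FROM   VendorCopy.dbo.Orders;
"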
Some problems you just can't search on. Here are some I wish were more searchable and this blog is my attempt to make that happen.
Tuesday, April 23, 2013
Friday, April 19, 2013
AppLocker Audit Mode Three Months Later
I enabled AppLocker in audit mode about 3 months ago for all of our workstations. I spent about 2 weeks checking the logs and adding rules. I put it on the back burner to take care of some other things and almost forgot about it. I ran those scripts I posted previously to check up on my workstations and things look fairly clean. Here are a few things that stand out to me.
There are a handful of things that run out of the user's profile and ProgramData that I need to be aware of. I see a Citrix and a WebEx client pop up on a few machines. Spotify also jumps out in the list. I didn't realize how many of our users used that. I also see a few Java updates being run from the Temporary Internet Files folder. Nothing too crazy here that would have impacted much. I expect it would have meant a handful of panicked calls from people who could not get some web conferences to work.
I did find a custom app that we wrote sitting on some desktops that would have broken. That would have been a big deal. I think I will just sign those apps and place them in the Program Files folder. I can use these logs to track down those users. This app is just an exe, so there is no installer or registry footprint to look for.
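Signing it should be a one-liner once we have a code-signing certificate. A hedged sketch, assuming the cert already sits in the personal store and with a made-up path to the exe:

# Sign the in-house exe with the first code-signing cert in the personal store
# so a publisher rule can cover it (path and cert location are assumptions).
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
Set-AuthenticodeSignature -FilePath 'C:\Build\CustomApp.exe' -Certificate $cert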
The last group of findings was just a handful of special machines that had something installed to a folder on the root of the C: drive. I could guess exactly where these machines were based on the names of those folders. I will handle these case by case. I am tempted to just give them local exceptions instead of baking something into the main policy.
Now that we are aware of these things, we can do things right going forward. Installing everything into Program Files would help the most. I plan on letting this run for another several months to see what else I pick up.
Tuesday, January 15, 2013
Review AppLocker Logs with PowerShell Remoting
I ran AppLocker in audit mode for a few days on a small number of computers. All that activity is collecting in the "Microsoft-Windows-AppLocker/EXE and DLL" audit log. It creates an event every time an application starts, indicating if it was allowed, blocked, or would have been blocked. That last event type is 8003 and that's the one I care about. The PowerShell command to view those log entries is this:
Get-WinEvent -LogName "Microsoft-Windows-AppLocker/EXE and DLL" |
    Where-Object { $_.Id -eq 8003 } |
    Format-Table Message
This will tell me every application that would have failed. I can either make a new rule or ignore it, knowing that it would be blocked in the future. I can combine this with PowerShell remoting to check the event log on every computer I manage.
# Query every computer in the domain (Quest AD cmdlet) as a background job
Get-QADComputer | ForEach-Object {
    Invoke-Command -ComputerName $_.Name -AsJob -ScriptBlock {
        $ErrorActionPreference = "SilentlyContinue"
        Get-WinEvent -LogName "Microsoft-Windows-AppLocker/EXE and DLL" |
            Where-Object { $_.Id -eq 8003 } |
            Format-Table Message
    }
}
# Throw away jobs that failed or returned nothing
Get-Job | Where-Object { $_.State -eq "Failed" -or $_.HasMoreData -eq $false } | Remove-Job

# Peek at all of the results without discarding them
Get-Job | Receive-Job -Keep

# Or pull the output from the first job that still has data
(Get-Job | Where-Object { $_.HasMoreData -eq $true })[0] | Receive-Job
If you have the admin share open to administrators, you can open Explorer to \\computername\c$ and find files on it. You can also use that remote admin share in the wizard to add new rules.
I saw Google Chrome show up in a user's profile on a remote computer. I was able to point the AppLocker rule wizard to \\computername\c$\users\john\appdata\.... and it added the needed rules. I was able to add 4-5 needed applications. I also saw some spyware on a few computers that I was able to clean up.
Now that we added some new rules, I wanted to clear the logs so they are cleaner next time. Here is the command to do that:
wevtutil.exe cl "Microsoft-Windows-AppLocker/EXE and DLL"
Getting Started with AppLocker
I am only running this in audit mode and I am already finding benefits of using it. AppLocker allows you to whitelist applications. If you were to use this on workstations that do not grant administrator access, you could probably stop all malware without any other protection. It turns out to be a lot easier than I thought.
The idea of whitelisting every application felt like a daunting task. There is a set of default rules you can use to make this easier. Running the default rules in audit mode can give you a good idea of how much work it will take. If you use a consistent image for every workstation deployment and install everything in Program Files, then this gets very easy.
First we need to enable the Application Identity service. I enabled it in the same policy that I plan on configuring the rules in. This should start on the next reboot. The next step is to configure audit mode.
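On a single test machine, you can check both of those things from PowerShell instead of waiting on Group Policy. This is just a sanity-check sketch; the service name AppIDSvc and the local Set-Service call are my assumptions for a lab box, not part of the GPO setup described above:

# Make sure the Application Identity service will start (lab machine only;
# the GPO handles this for everything else)
Set-Service -Name AppIDSvc -StartupType Automatic
Start-Service -Name AppIDSvc

# Dump the effective AppLocker policy and confirm the Exe rule collection
# shows EnforcementMode="AuditOnly"
Import-Module AppLocker
Get-AppLockerPolicy -Effective -Xml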
Now we need to create some rules. Right-click on Executable Rules and create the default rules. This will create 3 important rules to keep you from locking users out of their computers. The first is the Administrator rule, allowing admins the ability to run anything. The other two cover the Windows and Program Files folders. Any file in those locations is allowed to run.
If your users are not local administrators on their workstations, then the only things that can be in those folders are programs installed by an administrator. This is a very important point that highlights why this works so well. The only rules you need to add are ones for non-standard programs that don't run from Program Files. Hopefully this is a short list.
There are three types of rules you will deal with: path rules, publisher rules, and file hash rules. The built-in wizard does most of the work for this. Just point it at your installed application and it will do the rest. You have the option to make adjustments by hand if needed.
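If you would rather script it than click through the wizard, the AppLocker module has cmdlets that do the same job. Here is a sketch; the folder C:\Apps\LobApp is a made-up example of a non-standard application location:

# Collect publisher and hash information for everything under the app folder
Import-Module AppLocker
$files = Get-AppLockerFileInformation -Directory "C:\Apps\LobApp" -Recurse

# Build publisher rules where files are signed, hash rules where they are not,
# and merge the result into the local policy
$policy = New-AppLockerPolicy -FileInformation $files -RuleType Publisher, Hash -User Everyone -Optimize
Set-AppLockerPolicy -PolicyObject $policy -Merge

Set-AppLockerPolicy also takes an -Ldap parameter if you want to write the rules into the GPO instead of the local policy.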
Now apply this policy to computers in Active Directory. Give your computers plenty of time to get a reboot and a few days of activity.
Wednesday, November 28, 2012
Idea of annual Windows releases is very interesting
The idea of annual Windows releases is very interesting. The corporate customer of today gets stuck on a release and never changes.
In the past there have been issues. But the move from Vista to Windows 7 to Windows 8 has been fairly painless. And since nobody used Vista, it was really the move from Windows 7 to Windows 8 that was painless as far as application compatibility was concerned. I have more issues with IE10 than with Win8.
I already know what I don't want.
I don't want a separate corporate version and a separate consumer version. We had that back in 2000. We end up with home users using the corporate version and corporate users mixing in the home version. It confuses everyone when, at the end of the day, it is still Windows. I don't want to be stuck waiting for features to show up in the corporate version.
I also don't want to be supporting 10 different installations of Windows. Now that I am installing x86 and x64 machines that I expect to be in service for about 5 years, I could easily end up with way too many versions to keep track of. That just kills automation.
I also don't want any more drastic or confusing changes that users will not understand. If you are going to release yearly, I do not want to retrain my entire user population on Windows basics every year. Not deploying Windows 8 because a single feature requires user training is annoying; I would have rolled out Windows 8 to 10% of my workstations already if it wasn't for the Start menu.
Here is how I see the landscape.
Upgrades would need to be as smooth as a Service Pack. I am an advocate of fresh installs every time, and have advocated that for a long time. Windows 8 is the first release where I feel comfortable doing the upgrade and trusting the results. So they are already on the right track.
But I am still stuck in yesterday's environments with all of these thoughts. I see companies clinging to XP for no good reason as they rob themselves of all the advances that Windows 7 brought us. But that decade is over. Looking forward, the landscape is very different.
The transition to VDI is happening very fast. This presents us with something very unique, especially when yearly updates come into play. Depending on your setup, an OS refresh could be almost instant. Users could leave one day and when they return the next day, they are running Windows Next.
Microsoft's Hyper-V
If you look at how fast Microsoft is changing Hyper-V, we want the OS to change just as fast. Microsoft nailed it for virtualizing servers. They want to move into the VDI game, and a yearly OS refresh fits into that very well. I can't wait. The more I think about it, the more I expect it will be the corporate VDI customers who adopt Windows Next quickly.
Tick Tock Windows Blue
I just saw an article talking about Microsoft releasing a new OS every year. I think it is a great idea. But I already hear the rumble of separating corporate customers from the pack. This is what we had in the beginning: Windows 95, 98, and ME for the home user and NT and 2000 for the business user. What we ended up with was business users on Windows 98 and home users on 2000.
I do not want to go back to that. I think a Tick-Tock release schedule would be much better. The idea is that the Tick releases have major functionality changes and the Tock releases are where they get refined. Use the Tock release to appeal to the corporate customer.
Monday, November 05, 2012
SQL backups revisited. Just use Ola Hallengren's Scripts
I made a quick post about SQL backups not that long ago. Take it for what it's worth, but there is a much better way to deal with backups. Ola Hallengren has a set of maintenance scripts that could not be easier to use. I can't tell you how much time I have spent tweaking and adjusting my own scripts in the past. I knew of his scripts but never took the time to look at them.
All you do is run his script and then add a schedule to the jobs it creates. The jobs are very clear in what they do. If you review his site, he even gives a suggested schedule and job order that will fit most people. Those scripts handle many special cases. They know if your database needs log backups or not. They even take AlwaysOn backup preferences into account.
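For a feel of what the jobs end up running, here is roughly what a full backup call to his dbo.DatabaseBackup procedure looks like. The server name, backup directory, and parameter values are just my example starting point, not his documented defaults:

# Full backup of every user database via Ola Hallengren's DatabaseBackup procedure
# (requires the SQLPS module for Invoke-Sqlcmd; server and directory are placeholders)
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "SQL01" -Database "master" -Query "
    EXECUTE dbo.DatabaseBackup
        @Databases   = 'USER_DATABASES',
        @Directory   = 'D:\Backup',
        @BackupType  = 'FULL',
        @Verify      = 'Y',
        @CleanupTime = 24;
"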
I don't know why I never looked into them before, but I will use them on every database I administer from now on.