Monthly Archives: April 2011

Domain Controllers and DNS

Earlier I blogged about the Domain Admin account. That's a user account that has access to Domain Controllers, which are simply Windows Servers with special features installed. The Enterprise Admin account is one step up and can make changes that cannot be undone, only re-applied from scratch.

Usually when people hear DNS they think of the public version that everyone has access to. nslookup is a network utility that will tell you what IPs are behind a name:


Non-authoritative answer:


That would be the public response; however, if you have an internal network you can run an internal naming system. A computer on your network with a hostname of “SERVER1” can also have host (A) records and aliases in your local DNS server. If you set up a domain called myhouse.local you can add a bunch of alias records such as SRV1.myhouse.local, SQLBOX.myhouse.local and DEVENVIRONMENT.myhouse.local that all point to SERVER1’s IP address. In DNS you can also set up reverse lookup records to allow… well, a reverse lookup. You can perform a reverse lookup with the -a option in the ping utility:
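As a sketch, those alias and reverse records might look like this in zone-file form (the names are from the example above; the IP address is made up):

```text
; forward lookup zone: myhouse.local
SERVER1           A       192.168.1.10
SRV1              CNAME   SERVER1.myhouse.local.
SQLBOX            CNAME   SERVER1.myhouse.local.
DEVENVIRONMENT    CNAME   SERVER1.myhouse.local.

; reverse lookup zone: 1.168.192.in-addr.arpa
10                PTR     SERVER1.myhouse.local.
```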

C:\>ping -a

Pinging [] with 32 bytes of data:
Reply from bytes=32 time=31ms TTL=55
Reply from bytes=32 time=63ms TTL=55

You can install both the domain controller and DNS roles on a Windows Server. A domain controller's main feature is authentication. In a domain I can have one account, nujakuser@myhome.local, that has the ability to log into one, many, or no computers. Without a domain you log in with local Windows users: COMPUTERNAME\Administrator or COMPUTERNAME\nugz. With a domain you can log in with your local account (COMPUTERNAME\nugz) or with your domain account (MYHOME\nujakuser). Unless turned off, a local computer caches your domain account, so in the case of a network outage the user can still log in to the computer with their domain user.

Active Directory is Microsoft’s name for what stores all this authentication information. AD can store all kinds of user information such as manager, phone number, email address, security group memberships and mailing list memberships, etc. AD also stores information on any computer that joins the domain. If you are on a domain and browse to \\MYHOME.local\Sysvol you will see the information that AD stores. You can also get this information from \\domaincontrollerhostname\Sysvol. I highly recommend having redundancy in your domain controllers. The Sysvol information is replicated using DFS replication if you have more than one DC. Sometimes this replication can take a while and cause a delay between the time a change is made and when it takes effect for a computer connected to another domain controller.

Group policy is another feature of domains. If you had 100 servers and all of a sudden needed to make a firewall change, this is where group policy would come in handy. You can set up policies that apply to your domain users and policies that apply to your computers. You can group servers together to receive different policies: for example, SQL servers which open port 1433 and IIS servers which open ports 80 and 443. If you use Active Directory Users and Computers (ADUC) you can move a computer into a different OU to apply different group policies. From ADUC you have a GUI where you can reset passwords, create new computer accounts, disable computers and search for anything in your directory fairly easily.

Group policy can be a cuss word in some organizations because it can severely restrict what users are allowed to do. If your network has a group policy, you can be a local administrator on a PC and still not have access to restricted items, since GP takes precedence over local security policies. If you need to update group policy, simply run gpupdate /force from the command prompt or reboot your computer. It's good to have an organizational unit (OU) that does not apply GP so you can easily rule group policy out when testing security problems.

Domains are a bit of overkill for a home network of five or fewer computers. If you need to share files and whatnot, you can set up a workgroup that will accomplish most of what you need.


Posted by on April 28, 2011 in Uncategorized


a server admin returning from a real weekend

I work, or semi-work, most weekends because of the nature of supporting servers. I may not do much over the weekend but I am on call and browsing emails from my phone. It does make for a more pleasant Monday morning, but it puts a drag on my personal life. So often we support people can get stuck in firefighting mode. It's a terribly draining feeling when evening comes around and you haven't finished anything you intended to do that day.

I have been fortunate to get Good Friday off everywhere I have worked in my IT career. This last Easter weekend was a well-enjoyed, real weekend. I didn't check email but once, late Friday.

Planning for this three day weekend was key to an easy Monday. Making a list of things I was currently working on Thursday afternoon helped me clear my mind of ongoing projects so I could enjoy my weekend. I closed out of the applications I had open so I wasn’t distracted when I unlocked my computer on Monday.

I was also well aware of the issues we have when our company is off but the banks are open. I didn't do a great job of preventing them; however, I knew where to look and how to fix them.

When I sat down at my desk I had a list of things I was working on but that would wait until I completed this checklist:

1. Alerts – A common problem we have is running out, or nearly running out, of disk space. We keep free space fairly low on purpose so we can hand out expensive disk only when rapid growth demands it. Server-down alerts I filter a copy of to my inbox so they reach my phone; I usually get to those when I wake up in the morning.
2. Urgent emails and emails from familiar people – There are about three people who come to mind where, if they send me an email, I had better read it right away. For people not on that list, don't be afraid to flip the urgent switch, because I'll check those first.
3. HTTP request/SQL connection checks on important pages and databases – Automated monitoring isn't foolproof; write your own checks you can trust, or open connections and browsers and hit the key systems that need to be online at 8am.
4. Pop open the calendar to make sure you won't miss meetings.
5. Get to the second layer of emails – this is not the time for completing any task that takes more than two minutes; do write down to-do items or flag the emails.
6. Check your issue tracking software for problems that arrived outside of email.
7. Complete any items from your Thursday list and resume sanity.
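The HTTP check from the list above can be as small as a function you trust. Here is a minimal Python sketch; the URLs, and the idea of sweeping them in a loop, are placeholders for whatever must be online at 8am in your environment:

```python
import urllib.request
import urllib.error

def check_url(url, timeout=5):
    """Return True if the page answers with HTTP 200, False on any failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # DNS failure, refused connection, timeout, or a non-2xx response
        return False

# A morning sweep over whatever must be up at 8am (hypothetical URLs):
for url in ("http://intranet.example/", "http://reports.example/"):
    print(url, "OK" if check_url(url) else "DOWN")
```

A scheduled task could run this and email anything that comes back DOWN, so the first minutes of Monday are a glance instead of a hunt.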

To get myself out of firefighting mode I have a plan for a personal dashboard: integrate with existing monitoring software and even write some custom portlets to limit the influx of data after a three-day weekend. Right now it's in bits and pieces in different locations, and every time I move to a new application I incur the risk of being distracted. I plan on adding to this list and posting some source for some monitoring tools.


Posted by on April 27, 2011 in Uncategorized


Comparing MD5 and 3DES encryption with .NET

The reasons for using these two types of algorithms are completely different. MD5 is a hashing algorithm: when you use MD5 there is no way back. The main use of hashing algorithms is storing passwords. This way you never actually store the user's password (“passW0rd”); you store the MD5 hash of the password (YoOu0vDQKek5jEsEBHVM4A==). This becomes useful because the next time the user logs in, all you have to do is compare the originally stored hash with the hash of what the user entered into the password box. Converting “passW0rd” to MD5 will always produce the same hash, “YoOu0vDQKek5jEsEBHVM4A==”. There is no decrypting MD5. The way MD5 is broken is by creating a precomputed table of hashes for a huge number of possible inputs, a.k.a. a rainbow table. The stronger your password is, the larger the rainbow table has to be to break your hash.
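As a sketch of the compare-the-hashes idea (in Python rather than .NET; the UTF-16 encoding mirrors the UnicodeEncoding used in the VB example later in this post):

```python
import base64
import hashlib

def md5_b64(password):
    """Hash a password and encode the 16 raw bytes as Base64 for storage."""
    digest = hashlib.md5(password.encode("utf-16-le")).digest()
    return base64.b64encode(digest).decode("ascii")

# Store the hash at signup, never the password itself.
stored = md5_b64("passW0rd")

# At login, hash what the user typed and compare it to what was stored.
print(md5_b64("passW0rd") == stored)   # same input, same hash
print(md5_b64("Passw0rd") == stored)   # any change, different hash
```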

On a side note, data warehousing has begun to use hashes, but not for security reasons. If you have a very large number of columns, or very wide ones, and need to see whether a record has changed, you could check each column and compare it to the data you are about to insert. Or you could create a hash of the entire record and store that, which makes for really fast compares.
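A quick sketch of that record-hash compare (Python; the column values are made up for illustration):

```python
import hashlib

def row_hash(row):
    """Hash an entire record so change detection is one compare, not one per column."""
    # Join with a separator unlikely to appear in the data itself.
    joined = "|".join(str(value) for value in row)
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

existing = ("Jane", "Smith", "555-0100", 50000)
incoming = ("Jane", "Smith", "555-0199", 50000)

# One compare of two short strings instead of comparing every column.
print(row_hash(existing) == row_hash(incoming))
```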

The other method of encryption gives you the ability to decrypt the data. Triple Data Encryption Standard (3DES) uses a key and an initialization vector (IV). With these two pieces of information you can decrypt the data. For example, if you store employee salary data in a table, you would want to use this type of encryption so your DBA can't just read the data in the table. You can then have the key stored in one location and the IV stored in another.

I have been in discussions that go very poorly when deciding how to secure something, or whether it needs to be secured at all. There is usually a thorn of a person who quickly discounts methods of security and sways the whole group's opinion. MD5 by itself is no longer considered a solid security method. This is a fact, and usually the thorn in the group makes this comment but then either neglects to give an alternative or doesn't know what the alternative is. One alternative is SHA, which requires more processing and more storage, which in 99.9% of cases is completely fine. The other way the discussion goes is into a paranoia state, deciding everything needs ultra security. What I suggest is finding your happy medium: the level of security where you can get projects completed and performing well but are also secure.

I have pieced together two examples of encryption using MD5 and 3DES. I am not an encryption expert nor a math expert but that is just my point. Using the .NET framework you don’t have to be an expert to at least make an attempt to keep data secure.

MSDN gives great examples, which I have used.

The 3DES example: you'll notice the Encrypt and Decrypt functions are nearly identical. It'd be best to refactor, but I left them separate for illustration purposes. This program takes ‘test.txt’, encrypts it to ‘3destest.txt’, and then decrypts that to ‘decryptedtest.txt’.

Imports System.IO
Imports System.Security.Cryptography
Module Module1
    Sub Main()
        Dim key() As Byte = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24}
        Dim iv() As Byte = {8, 7, 6, 5, 4, 3, 2, 1}
        EncryptData("test.txt", "3destest.txt", key, iv)
        DecryptData("3destest.txt", "decryptedtest.txt", key, iv)
    End Sub
    Private Sub EncryptData(ByVal inName As String, ByVal outName As String, ByVal tdesKey() As Byte, ByVal tdesIV() As Byte)
        'Create the file streams to handle the input and output files.
        Dim fin As New FileStream(inName, FileMode.Open, FileAccess.Read)
        Dim fout As New FileStream(outName, FileMode.OpenOrCreate, FileAccess.Write)
        'Create variables to help with read and write.
        Dim bin(100) As Byte 'This is intermediate storage for the encryption.
        Dim rdlen As Long = 0 'This is the total number of bytes written.
        Dim totlen As Long = fin.Length 'This is the total length of the input file.
        Dim len As Integer 'This is the number of bytes to be written at a time.
        Dim tdes As New TripleDESCryptoServiceProvider()
        Dim encStream As New CryptoStream(fout, tdes.CreateEncryptor(tdesKey, tdesIV), CryptoStreamMode.Write)
        'Read from the input file, then encrypt and write to the output file.
        While rdlen < totlen
            len = fin.Read(bin, 0, 100)
            encStream.Write(bin, 0, len)
            rdlen = rdlen + len
            Console.WriteLine("{0} bytes processed", rdlen)
        End While
        'Closing the CryptoStream flushes the final block; without this the
        'last bytes of the file never make it to disk.
        encStream.Close()
        fin.Close()
        fout.Close()
    End Sub
    Private Sub DecryptData(ByVal inName As String, ByVal outName As String, ByVal tdesKey() As Byte, ByVal tdesIV() As Byte)
        'Create the file streams to handle the input and output files.
        Dim fin As New FileStream(inName, FileMode.Open, FileAccess.Read)
        Dim fout As New FileStream(outName, FileMode.OpenOrCreate, FileAccess.Write)
        'Create variables to help with read and write.
        Dim bin(100) As Byte 'This is intermediate storage for the decryption.
        Dim rdlen As Long = 0 'This is the total number of bytes written.
        Dim totlen As Long = fin.Length 'This is the total length of the input file.
        Dim len As Integer 'This is the number of bytes to be written at a time.
        Dim tdes As New TripleDESCryptoServiceProvider()
        Dim decStream As New CryptoStream(fout, tdes.CreateDecryptor(tdesKey, tdesIV), CryptoStreamMode.Write)
        'Read from the input file, then decrypt and write to the output file.
        While rdlen < totlen
            len = fin.Read(bin, 0, 100)
            decStream.Write(bin, 0, len)
            rdlen = rdlen + len
            Console.WriteLine("{0} bytes processed", rdlen)
        End While
        'Flush the final block and close the streams.
        decStream.Close()
        fin.Close()
        fout.Close()
    End Sub
End Module

For the MD5 example I take a console parameter string and hash it into the ‘PasswordHash.txt’ file. There is an extra step in here to carefully encode the hash as a Base64 string, since you could run into problems writing the raw byte array output to a file.

Imports System.Text
Imports System.Security.Cryptography
Module Module1
    Public Password As String = Nothing
    Sub Main()
        If My.Application.CommandLineArgs.Count > 0 Then
            Password = My.Application.CommandLineArgs(0)
            'Append the Base64-encoded hash to the file, then close the writer.
            Dim objStreamWriter As System.IO.StreamWriter = System.IO.File.AppendText("PasswordHash.txt")
            objStreamWriter.WriteLine(GenerateHash(Password))
            objStreamWriter.Close()
        End If
    End Sub
    Private Function GenerateHash(ByVal StringToEncrypt As String) As String
        Dim UniObject As New UnicodeEncoding()
        Dim ByteSourceText() As Byte = UniObject.GetBytes(StringToEncrypt)
        Dim Md5 As New MD5CryptoServiceProvider()
        Dim ByteHash() As Byte = Md5.ComputeHash(ByteSourceText)
        Return Convert.ToBase64String(ByteHash)
    End Function
End Module

Posted by on April 22, 2011 in .NET, Network Admin


ftp scripting

Windows ships with the old as dirt “ftp.exe” which is an FTP client. It is a command prompt only program that you can use to transfer files to a computer running an FTP server. It has scripting functionality but is severely lacking when compared to other programs in its class. Some FTP programs I have used are:

ftp.exe – blah, no GUI, no TLS
FileZilla – awesome and open source but no CLI for scripting, also has server capabilities
WS_FTP – Ipswitch’s version which has been around for ages. Client and server, can be scripted and has retry logic available. Free and professional editions.
coreftp – GUI, not sure if you can script this one but it is free
WinSCP – Another freebie with GUI and CLI. Verbose in its logging, which can help diagnose connection/firewall issues. It even has downloadable .NET code so you can write your own. Or, here is a VB.NET WinSCP example

The GUI concept is pretty simple: connect, then your computer is on the left and the remote computer is on the right. Drag, drop and you're done.

The first step would be to set up a connection profile. You can do this in the GUI of everything I listed above except ftp.exe. You store the server information in the connection profile. This includes, minimally, the hostname, the username, and a name to reference the connection profile by. More information can be stored in this profile, such as port, password, starting directory, number of retries, connection type (FTP, FTPS, SFTP), server and/or client keys and much more.

WS_FTP Pro supports retries, timeouts, keepalives, key generation/storage, and more features than your average FTP program, but there isn't much sense paying for that unless you are running a huge FTP operation.

Scripting is fairly simple. You really only need two files: a batch file (.bat) in which you line up some commands, and a script file that you pass into the FTP client program to do the actual transfer.


USER username pwd
PUT test.txt


cd scripts\ftp
ftp -i -s:ftp.scp

The FTP protocol uses port 21 by default and sends passwords in plain text to the server. Windows ftp.exe can only use this protocol. Both should be avoided if there is any speck of data that should be secured. The SSH protocol can transfer files securely, using keys to encrypt the transport layer; this is referred to as SFTP, and any of the clients besides ftp.exe can use it. FTPS also uses keys to encrypt the transport layer, and it comes in implicit and explicit types. WS_FTP Pro can use this method; WinSCP cannot.

I would recommend WinSCP and SFTP. Earlier I posted instructions on setting up my server, which include the easy steps to configure an SFTP server. If you use WinSCP you can catch errors in your batch file by checking the %errorlevel% on the line after you call winscp.exe. Standard ftp.exe returns 0 even if it cannot connect to the FTP server. If there are no files to download on the remote server, WinSCP will return 0 and WS_FTP will return 1. If you are expecting files every time your script runs, you should also check IF EXIST after checking the %errorlevel%.
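Here is a sketch of that errorlevel-plus-IF EXIST check as a batch fragment. winscp.com is WinSCP's console binary; the script name and the expected file name are hypothetical:

```bat
cd scripts\ftp
winscp.com /script=download.scp
if %errorlevel% neq 0 goto transfer_failed

rem WinSCP returns 0 even when no files matched, so also check for the file.
if not exist expected.txt goto no_files

echo transfer ok
goto end

:transfer_failed
echo winscp returned an error
goto end

:no_files
echo no files were downloaded

:end
```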

Download and install WinSCP. Next, set up a connection profile. After adding the WinSCP install directory to your PATH environment variable you can change your .bat file to:

cd scripts\ftp
winscp.com /script=example.scp

To upload a single file, example.scp would look like this:

option batch abort
option confirm off
put examplefile.txt /home/user/

The password and encryption keys are stored in the connection profile. There is a winscp.ini file that contains this information, which shows up when you launch the GUI. The passwords are encrypted, but I recommend guarding this .ini file because the encryption can be cracked. WinSCP does not even recommend saving the passwords for your connection profiles.


Posted by on April 22, 2011 in Network Admin


self hosted blog for now

Since I am a server administrator I decided to take on the challenge of hosting my own blog.


Posted by on April 19, 2011 in Uncategorized


normalizing server documentation

Documentation is a bit of a chore. But like the dishes, things feel a bit better after it's done. Also like dishes, the job is never really done.

My team is asked quite frequently to document all kinds of procedures. As a server admin, there are a few key questions that need to be documented.

Maintenance Window – When is it acceptable for the server to be down?
Shutdown/Startup procedure – If not automatic, what steps need to be taken so the application comes online? How do we test the application is working?
Dependencies (part of startup) – If this server is down what applications are affected? Also, what other things need to be operational before this server is working?

Instead of supporting a specific server, we work on teams and view support from an application perspective. This is good, but it adds the need to document which servers belong to which applications. Alerts, such as those from SCOM, don't usually include the application name unless you have a good naming convention.

There is usually a main or go-to leader for an application and the rest of the team is backup. Digging deeper into the key documentation we should find more information specific to the application.

Credentials – Admin and service accounts should be documented. There are plenty of options for encryption software or password wallets. Include links to the software's administration pages or programs. Also document what authentication method is used for general users.
How to contact support – phone number, website, credentials etc.
Restore steps – where are the backups, restoring onsite, restoring offsite, rebuilding from scratch
Visuals – Diagrams that show what servers users hit and all the pieces in the dependency chain that could break.
How to contact users – consider setting up a mailing list so this can be automatically updated
Server general info – SN#, OS version, app versions, drive space, normal CPU/mem/disk/network usage. You should be able to automate most of this.

There are probably some things I missed and some things that may not be needed. Now that we've got all the data, we need to work on normalizing it. By normalizing documentation I mean two things: #1, removing redundant documentation, and #2, creating a template so all documentation is similar.

#1 is like database normalization. Create your metaphorical lookup tables so server admins don't have to document anything twice. Don't create a giant spreadsheet, because then you will be repeating yourself over and over again. Also, try to pick a medium that makes it easy to combine things that are already documented; this will help remove redundant and hidden documentation. Spreadsheets are also a poor idea because they don't handle screenshots and other visuals well. Choose something like SharePoint or OneNote. Also, make sure the medium you pick is available offsite. Automating into and out of this documentation might be useful.

#2: Create a template and cleanse the already existing data. Take, for instance, the maintenance window. We need a general format; my idea is to pick 5 to 6 different maintenance windows and try to fit all of the servers into those. Try to answer the question "what is the best time to perform maintenance?" instead of "when is it possible to perform maintenance?" Honestly, for most applications, if you just clear it with the users you can perform maintenance anytime. Examples of maintenance windows would be "non-business hours", "Sunday or holidays", "2-4am", "business hours", "during mainframe maintenance once a month" and "other". Try to avoid "other" to conform to the template, but also don't be naive: there are always valid outliers to any rule.

Like any large project, focus your energy on the important parts: the parts that are severely lacking. First pick a medium, then practice your template on a couple of applications so you can fine-tune it before you tackle the whole list of servers.


Posted by on April 10, 2011 in Network Admin


BSOD Blue Screen of Death analysis

My previous desktop at home never blue screened. In fact, I hadn't experienced many BSODs since the Win98 era. However, this new build has been giving me some grief, so I started looking into it.

I never actually saw the BSODs. It turns out Win 7 automatically "recovers" from a BSOD by rebooting. Sure, none of the 20 things I had open were open anymore, but it's nice not to have to read the fairly worthless message and end up rebooting anyway. After you log in you get a nice message saying Windows recovered from a critical error ("blue screen").

When a Windows system crashes it attempts to create a crash dump file, which includes the information that was stored in memory at the time of the crash. If you do not have the proper page file setup you will not be able to create the dump files; the page file should be at least equal to the size of your RAM and be on the system drive. Win7 creates minidump files that do not take up as much space.

I didn’t think my computer had crashed that many times until I realized I needed to free up some space on my computer. I found 11 minidump files which only add up to 3MB in C:windowsminidump. A larger 438MB file was located in C:windows called MEMORY.DMP. I came across these files using windirstat which is another awesome free tool that maps out the files on your computer and shows their sizes among other things.

I knew blue screens usually happen because of drivers and/or bad memory. I had run a memory test (memtest86) and it didn't find any errors. I tried opening one of the .dmp files, but there isn't a default viewer for dump files in Win7. I was pleased to find out there was a free tool, BlueScreenView (download link near the bottom), that makes it mind-numbingly simple to view the important contents of the dump file.

Mine were caused by the driver "atikmdag.sys", which is my video card driver. I went to the AMD website and downloaded the latest version. It's been about a week and no issues yet… hopefully the new code did the trick.


Posted by on April 7, 2011 in Uncategorized


Quick and Dirty IIS Log Analyzer

There are nearly a million tools for making IIS logs useful. Log Parser 2.2 from Microsoft can combine all kinds of logs and even import them into SQL for further analysis. I've read instructions on how to use this command-line tool but didn't think I needed to go down that route for the problem I needed to solve. I was asked for a simple count of requests per day from a particular client IP. Also, there were some strange HTTP errors and I was asked if there was "anything in the logs".

The application in question uses three separate IIS web servers behind a load balancer. The production environment gets about 300,000 hits or log entries a day. The test environment gets about 1.5 million with the automated scripts that run against it. That kind of traffic makes the logs too big to easily read from notepad.

You can set up IIS 6.0 to log directly to a database with ODBC, but there is added overhead in that process. As for logging to a file, there are a few options. For starters, all of our servers are in the same time zone, so I chose the Microsoft log format over the W3C format, which logs in GMT. The other option was how I wanted to roll the log; I kept the default of daily and chose the time of midnight. If you go into IIS Manager and the site properties you can see these log options.

So I needed an application to pickup yesterday’s log from three servers and combine them into one SQL database table. I could then run this with a scheduled task daily.

This KB article was helpful in getting started with the W3C format: it showed me how easy a bulk import would be and also had the create table script. I started by copying one of the logs to the SQL server and quickly realized my application would have to remove any header lines. The bulk import didn't work if any data in a column fell outside the defined parameters, e.g. a varchar(12) that was actually 20 chars, or an int filled with text.

I ended up with a query that creates a new table for each day (if it does not exist) and does the bulk import from a file on the root of the SQL server's C: drive. I found it was important to have the status and winstatus columns be of type int so I could easily search on them (where status > 200). Also, the date and time columns had to be of date and time types to search for particular ranges in the logs. The rest of the fields weren't too important to me, so they could be varchar(max) for now.
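A hedged sketch of what that daily table and import might look like; the table name, file path, and column list are illustrative, not the exact schema from my app:

```sql
-- One table per day, created only if it doesn't exist yet.
IF OBJECT_ID('iislog_110403') IS NULL
    CREATE TABLE iislog_110403 (
        clientip   varchar(max),
        username   varchar(max),
        logdate    date,          -- date/time types allow range searches
        logtime    time,
        target     varchar(max),
        status     int,           -- int so "WHERE status > 200" works
        winstatus  int
    );

-- Microsoft log format is comma separated, one request per line.
BULK INSERT iislog_110403
FROM 'C:\combined110403.log'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- The request count the users asked for:
SELECT COUNT(*) AS requests
FROM iislog_110403
WHERE clientip = '10.0.0.50';
```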

So my application does these steps:

1. Create a yymmdd string for yesterday
2. Pick up the three “in(yymmdd).log” files
3. Remove any title records
4. Combine the files
5. Send the result to the SQL server
6. If successful, delete the .logs from the IIS servers
7. Create a .sql script file with the dates and file locations

Then the last step was to create a batch file and execute it with a scheduled task. The batch file runs my app and executes the .sql script.

That very quick and dirty solution has gotten me by for about 3 months without any change. A cleaner and better version would use SSIS and probably Reporting Services to let people view their own logs. Down the road I would like to import the corresponding event logs and perfmon CPU data.

I feel happy with my decision for now. The alternatives were to wait for the purchase of a central logging solution, find a way to get Notepad or Notepad++ to search the text files, or tell the users I didn't have the information they were looking for. Oftentimes, as an ex-programmer, I see a simple problem and just fix it. This can be bad if too many of these hard-to-support apps pop up, but for now I have kept them to a minimum.


Posted by on April 4, 2011 in Network Admin