Monthly Archives: August 2010

SQL questions

SQL is a way to turn raw data into knowledge: a language in which you can ask a database a question.  If everything goes well, the engine responds with useful information.  Consider this question, more technically known as a query:

select * from PendingOrders

Piece of cake.  That query translates to the question, “What are all my currently pending orders?”  PendingOrders is the table name and “*” is the list of columns in that table you want to see; * means give me everything. So you ask a simple question and the database spews back 700 records in less than a second.  You then realize this isn’t quite knowledge yet; it’s just a subset of the raw data.

select OrderID, CustomerID, SubmitDate from PendingOrders where SubmitDate > 20100101

OrderID CustomerID SubmitDate
000212 000201 20100120
000112 000201 20100211
000092 000219 20100320

You now get a results grid like this, so you quickly flip over to the Customer table, look up ID 201, and see Stewbeef is CustomerID 201.  You flip back to your other query data, then flip over to the Orders table and find Stewbeef did actually order the silverware he is complaining about; it just never showed up on the website because of some bug.  To this I say there are no stupid questions, only stupid people: you could have done a join and seen all these results in one grid.

It appears your data is somewhat normalized, basically meaning you have more than one table holding everything.  Normalization is required for modern systems to function properly and for the data to maintain some kind of integrity.  Unfortunately it makes things harder for everyone: developers, DBAs and ad-hoc users.
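To make that concrete, here is a rough sketch of how the three tables in this post might be laid out. The column types are guesses based on the queries above, nothing more:

```sql
create table customers (
    customerID int primary key,
    lastname   varchar(50)
)

create table orders (
    orderID    int primary key,
    customerID int,         -- points back to customers
    orderinfo  varchar(100)
)

create table PendingOrders (
    orderID    int,         -- points back to orders
    submitdate int          -- stored as YYYYMMDD in these examples
)
```

Each table holds one kind of thing, and the ID columns are what let a join stitch them back together.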

select customers.lastname, orders.orderinfo, PendingOrders.submitdate
from customers
inner join orders on customers.customerID = orders.customerID
inner join PendingOrders on PendingOrders.OrderID = orders.OrderID
where PendingOrders.submitdate > 20100101
and customers.lastname like '%stewbeef%'

SQL is really easy until you get to joins.  Beyond joins, there are many mathematical and date functions that will help you get your answers in a better form than raw data.
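For example (a sketch against the same PendingOrders table, assuming SQL Server-flavored syntax), aggregates like count and min can summarize the pending orders per customer instead of handing you raw rows:

```sql
select CustomerID,
       count(*)        as PendingCount,
       min(SubmitDate) as OldestSubmit
from PendingOrders
where SubmitDate > 20100101
group by CustomerID
order by PendingCount desc
```

One row per customer, worst offenders on top: that is a lot closer to knowledge than 700 raw records.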


Posted on August 22, 2010 in SQL Dev


express VB.NET my.fav

Scripting, programming, developing… unfortunately these three tend to blur together in practice, but they are very distinct.  Scripting is the most basic form of automation, and you should not confuse someone who can write a script with a programmer.  Programmers have a leg up: an environment to compile their code and perform complicated tasks with fewer lines.  Developers take advantage of the hard work of others and make something spectacular (maybe).

Deciding what to code with…

$$$ – The VB.NET/SQL Express editions are free.  You can write an amazing piece of software with these tools.  Where the costs come in is when you outgrow the express editions and need to purchase Windows Server, or when you hit one of these limits:

4GB+ database
Windows service templates
easier implementation of SDKs

I have even recently heard about a Linux tool called Mono that lets you run your .NET code without Windows.  It may be difficult to support, but the point is you can do this for free.

So take a break, go download the express editions, and take control of your computer(s).

When you make a new Visual Basic Express project you have a handful of options.  I have experience with console applications and Windows Forms applications (a.k.a. GUI).  In a matter of about 10 seconds you can load the IDE, pick new console application, and let IntelliSense finish a line like My.Computer.FileSystem.WriteAllText("world.txt", "hello!", True) in just a few keystrokes.  [f5]

and there you have it: your hello world app.  Have you ever heard the quote, “Notepad is the best for coding”?  If you have done the example above you’ll now be able to look down your nose at the scripter who said that.  You can do this because you know the development environment made you more efficient by setting up your module and directories and giving you IntelliSense to finish each class in the example, which is really:

Module Module1
    Sub Main()
        My.Computer.FileSystem.WriteAllText("world.txt", "hello!", True)
    End Sub
End Module

[f5] (to compile and run)

With a little practice you can actually do this .NET example in less than 10 seconds.  The Microsoft IDE is much faster than its IBM or open source counterparts.

VB.NET is not much different from C#.  Many of the benefits of VB carry over to C#; they have the same IntelliSense into the framework.  In fact, they are so similar that there are many free webapps that will convert code back and forth automagically. I like VB.NET better because the syntax just makes sense.  It is a little more verbose, which makes it more human readable, and with the power of auto-complete you end up hitting close to the same number of keystrokes.

In the latest 2010 version, if you highlight a variable or other piece of text, all other matches are highlighted, showing you that if you change that value you will be changing those other pieces of code.  The [f5] part of the example actually compiles your .exe and runs it in debug mode.  Move the cursor down to the write line and press [f9] to set a breakpoint.  The next time you compile in debug mode the program will stop at that line and you can examine your variable values.  At this point you can view “debug windows” such as the call stack or watch windows.

This debug mode is the only way to write code.  Stepping line by line using [f10] and stepping into any subroutines and functions using [f11] will help you remove errors, hence the name “debug”. If you are not using debug mode you are a scripter and nothing more.

Good code handles errors that may arise.  The last resort for handling errors you did not expect is the try/catch.  In the IDE, typing “try” and pressing enter will give you the basic syntax.  If the statements inside the try succeed, the statements inside the catch are never processed. All good programs have some form of exception logging, such as:

System.IO.File.AppendAllText("error.log", ex.ToString() & System.Environment.NewLine)

Especially with silent-running console apps you need this error handling.  Let the user know the program failed, and then give the developer meaningful information to help resolve the issue.
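A minimal sketch of what that might look like in a console app; the file names here are just placeholders:

```vb
Module Module1
    Sub Main()
        Try
            ' Work that might fail goes here, e.g. reading a file that may not exist.
            Dim text As String = System.IO.File.ReadAllText("input.txt")
            Console.WriteLine(text)
        Catch ex As Exception
            ' Tell the user the program failed...
            Console.WriteLine("Something went wrong, see error.log for details.")
            ' ...and log the full exception for the developer.
            System.IO.File.AppendAllText("error.log", ex.ToString() & System.Environment.NewLine)
        End Try
    End Sub
End Module
```

The ex.ToString() dump includes the message and the stack trace, which is usually enough to start debugging.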

The best way to describe why I like VB the best is to post some code examples.  Once I figure out a way to easily maintain syntax highlighting and insert some screenshots, I will have a better VB.NET post for you.


Posted on August 12, 2010 in .NET


No more mallocs, pointers or linked lists

I started with HTML, which isn’t really a language, but it was a definite milestone.  Then came IRC scripting, and on to C++ and some VB.  In college I started .NET and dabbled in some PHP.  This is just a summary, but the point is I progressively worked my way up into the modern languages until my first career job.  For my first 2+ years I programmed desktop software written in C.  It was a very mature and complex program.  It threw me off my game quite a bit.  The three previous years I had almost exclusively written web applications that used some form of database software.  I was forced back in time to a land of custom “frameworks” and what I would refer to as “hard core programming.”

A pointer… just think about it.  Picture something really cool that you want.  Now picture your friend staring at it.  If you didn’t know where that really cool thing was, you could look at your friend and he would tell you where it was.  So your friend is pretty much useless by himself, but as long as he is staring at what you want you’ll keep him around.  But if your friend dazes off and loses track of the object of your desire, you may as well consider him useless… or NULL.  NULL pointers are dangerous.

A malloc is a simple subroutine that was mostly phased out by the “new” operator in later languages.  The idea is that you can reserve a block of memory and store whatever you want in it. The malloc() function takes a parameter for the size you would like to reserve and returns a pointer to the memory location.  Avoid memory leaks at all costs. It is always important to keep good track of the memory you are using; if you are not using it, free it. The problem occurs when you mean to reuse a variable but actually just allocate room for a new one.  The old allocations pile up until the system runs out of memory.

I recall linked lists were a tough concept for me to grasp.  It wasn’t because it’s a difficult concept; I just couldn’t understand why you wouldn’t use a database to store your records.  Maybe enforce some referential integrity? No, not in the C era, unless you do it manually.  For those of you who know databases but are curious about linked lists, picture a result set in SQL: 100 records and 10 fields.  The way you would store and scan this in C is sequentially.  The first record would have your root pointer at the beginning.  Each record would have a pointer to the next record, and the very last record’s next pointer would be NULL.  If you need to find all records whose last name = smith you could do something like this:

for (rec = root; rec != NULL; rec = rec->next)
    if (strcmp(rec->lastname, "smith") == 0)
        /* found a match */

You can imagine how fun things got when you had to insert/update/delete records.  The ACID properties were pretty much out the window.

Posted on August 8, 2010 in Uncategorized


restore testing

No good app goes untested.  So why do so many backups go untested?

Fact is, backups are low on the priority chart, and that is the way it should be.  If last night’s backup failed once, no big deal; it will take care of itself tonight.  If it’s been failing for a week… I’ll get to it when I get a chance.

If you have been in the business for a while, you either stopped reading in disgust or have continued on so you can post the most damning comment when you’re done.  I can hear the users now, freaking out that this is how I feel about backups.  For those people, let me explain why.

Some buzz phrases that have been around IT for quite some time are “high availability”, “disaster recovery”, “5 9’s” and “redundancy”.  Hardware and software vendors alike absolutely LOVE these phrases.  They may as well translate into the word “gravy”.  As you can imagine, vendors make a killing every time a company hops on the HA bandwagon.  So let me briefly describe these buzzwords:

5 9’s: In IT it means your system is available to users 99.999% of the time (a little over five minutes of downtime per year).

redundancy: For full redundancy, every component in a system must have a duplicate that can take over in the event of failure.

Disaster Recovery: If the entire location were lost a second location could be brought online.

High Availability: Usually defined in an SLA with a Recovery Point Objective (RPO) and Recovery Time Objective (RTO), HA is a culmination of the three previous buzzwords.

I like my “theoretical” examples, so let’s set one up.  A new piece of accounting software has been POC’d and your company is ready to dive right in. IT is usually brought into the discussion way too late.  At this point us IT’ers generally have no say or ability to review the software; we just need to stand up the environment as fast as possible.  Any environment should come with an SLA.  Now would be a good time to do some mock-ups of the environment so you can accurately predict your RPO/RTO.  Remember, your accounting department is rather $$$ savvy, so this is where they might weigh the cost against their wants.  Their wants usually include that the system never go down.  Explain that this is not possible and that most systems have a worst case scenario of RPO/RTO 24/2.  This means you take nightly backups of the data (24 possible hours of data loss) and can have the environment back up in 2 hours.

IT in 1995 DR mode might suggest a system that simply backs up the important files nightly to tape.  If they’re smart, they have sent their tapes offsite.  In the event of a fire, a restore would go something like this:

1. Buy new hardware
2. Set up hardware
3. Buy a new OS (yup, the CD is melted)
4. Install drivers
5. Download app software
6. Set up app software
7. Set up database software
8. Restore files from tape
9. Modify security settings
10. Modify DNS settings to point to the new location
11. Notify users the environment is up

OK, so you met your 24 hours of data loss, but how are you doing on time? If it was just one server and the application software was easy to install, you are looking at about 72 hours of downtime. 1995 mode no longer works for today’s environments with 50+ servers.  With that many servers you would be looking at a month of downtime, and most likely you would not have all the data you need to meet your 24 hours of data loss.

Fast forward to 2010, enter super sweet HA technologies ($$$$$$$$).

Consider the pieces of a fairly simple but modern server farm. Let’s call this Proposal A.
Network Load Balancer
Domain Controller1
Domain Controller2
Web Server1
Web Server2
Web Server3
Database Server Principal
Database Server Mirror
Database Server Witness
Storage Area Network

A user request comes in and hits the NLB, then one of the web servers; the user is authenticated against a domain controller, a database call is made to the principal, data is pulled off the SAN, and the response is sent back to the user. The NLB and SAN have built-in redundancies, so you’re nearly covered. Now, multiply every cost by 2 for your 1:1 DR site.

Proposal B, from that server admin stuck in 1995, might include just one server that handles all of this. Fortunately there will be two hard drives and two power supplies built into this server.

So after that lengthy setup I can now make a decent argument. Factor in variables like hardware life cycle and maintenance costs, and Proposal A is so ridiculously expensive it would take a madman to choose it. At what point did we careen down the path of insanity and throw out all cost-benefit analysis? We have fallen into a trap that is very hard to get out of. Who in your organization will have the courage to pick Proposal B? How about if it’s not even a mission critical application? Chances are, if the cost won’t put your company under, no IT manager will pull the trigger on B.

I say go for it. The odds are with you. Chances are you won’t have any problems with B in the 4-5 year lifespan of the hardware. Just work harder to script everything. Script the Windows patches. Script the entire environment restore. Do this to minimize the downtime, and test your restores. Buy that second server for a cold site in case the unthinkable happens. Have your tapes over there, ready for remotely scripted restores. Don’t buy into the fear tactics of the hardware vendors, which even the government is helping propagate now. All you need is a good plan.


Posted on August 7, 2010 in Network Admin, SQL Admin


the .bat framework

As I look back, I can’t believe how far I made it in IT before I saved my first .bat file.  I went through multiple programming classes and over two years of enterprise level programming before I saved that simple text file as a .bat.  The mindset when writing scripts is usually very sequential and single threaded in nature. In that regard, I hope dodging the .bat for as long as I did has helped me become a better programmer. My first was about a year ago, and since then I have created hundreds of .bats, mostly for file transfers.

Batch scripting is automation in its simplest form. Say every night before you go home you save the most recent copy of all of your spreadsheets to a backup drive.  Right-click, copy/paste, done.  Why on earth would you want to automate that?  Because anything that can go wrong will.  The night you forget to copy will be the same day you were slammed with all kinds of work, the same day you got tons of work done on those spreadsheets, and the same day your hard drive fails.  The next morning you realize that, for some reason, two months ago you decided to drag and drop your spreadsheets instead of copy/pasting, and Windows decided you just wanted shortcuts and not the actual files.  So now that you are scared, let’s make a .bat file to copy all of your spreadsheets.

Create a new text file and put this into it:

copy /y "c:\work\spreadsheets\*.xls*" "d:\thumbdrive\"

Now save the file as scripto.bat.  Then you might choose to set up a scheduled task at 6pm to run scripto.bat.  And there you go, piece of cake. Painfully simple automation right at my fingertips for so long, and I never took advantage.
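If you prefer the command line to the Scheduled Tasks GUI, something like this should register the job; the task name and path here are just placeholders:

```bat
schtasks /create /tn "scripto" /tr "C:\scripto.bat" /sc daily /st 18:00
```

Run schtasks /query afterward to confirm the task landed, since a scheduled backup that never fires is no backup at all.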


Scroll in any direction to continue: up, and you are off and running with your own .bat ideas, or down, because you are interested in where I will take you.

So now that you have your spreadsheets in two locations, you are protected against disk/computer failure. Let’s cover some of the basics, then kick it up a notch.

cd "test"
copy a.txt b.txt
if %1 GTR 3 goto donotdel
del b.txt
del a.txt
:donotdel
exit /b 0

The %1 is a parameter.  If you open up the command prompt, navigate to the directory of your batch file, and type batchfile.bat 20, you will pass in “20”; the “goto donotdel” will get processed and you’ll skip the delete steps.

So back to our theoretical example. You think your backup strategy is awesome; heck, maybe it even worked once.  So you decide to push it on all of your co-workers, and they also like the idea.  Then your boss calls to give you a raise, you drop the phone on the keyboard, and the keyboard happened to have one of those spreadsheets open.  Now the IT department, instead of having expenses totaling $92,020, has $920,220 in expenses.  You save, close, and your backup runs at 6pm, overwriting your last good copy.  Or maybe it is worse and you deleted an entire column and have no way of knowing what the numbers were.

So let’s have that .bat archive the spreadsheets.  Think of it as the poor man’s source control.

for %%A in (*.xls*) do copy /y "%%A" "D:\test\%%A"

This command loops through the filenames matching *.xls*.  Source control, in its basic form, allows you to roll back your changes by retrieving any version of a file.  You won’t achieve that exact functionality, but you will be able to roll back to any given day if you incorporate a timestamp in the path.  So in the next example you set a variable to the date and create a directory to place the files into.

cd test
set YYYYMMDD=%date:~-4,4%%date:~-10,2%%date:~-7,2%
if not exist %YYYYMMDD% mkdir %YYYYMMDD%
for %%A in (*.xls*) do copy /y "%%A" "D:\test\%YYYYMMDD%\%%A"

Scripting and automation are easy, but they create new problems.  If you learn to rely on automation without periodically making sure it is working, you will suffer the consequences.  For example, if you install a DVD burner in your computer, then every day after that when you plug in your flash disk to pick up your spreadsheets, you will be disappointed: they won’t make it to your newly remapped E: drive.  And if you had successfully restored files a few times, you may have grown complacent about saving your spreadsheets frequently.

Once you weigh the costs against the benefits, I believe you will make the right choice and continue to automate.  Choosing to automate with batch files means your batch files will multiply rapidly.  When they multiply, you will also find that you are creating most of the lines over and over again with small changes.  It’s the copy/paste school of scripting.  It’s difficult to avoid, but you should try.  Frameworks in programming are all about avoiding having to re-invent the wheel each time you need to travel; with a framework we can hop in the car and go.

When I analyze the example above to determine how I can avoid copying and pasting this code, I think about what the next batch file will look like, then compare the two and see what changed.  This way you can determine what the parameters should be.  The next step to building your framework is to replace the 4 possible parameters (drive, current directory, extension, destination):

%1
cd %2
set YYYYMMDD=%date:~-4,4%%date:~-10,2%%date:~-7,2%
if not exist %YYYYMMDD% mkdir %YYYYMMDD%
for %%A in (*%3*) do copy /y "%%A" "%4\%YYYYMMDD%\%%A"

Save the file above and use this to pass in all the parameters.

CALL dailybkp.bat D: test .xls D:\test

I know it’s a very loose usage of the term “framework”, but dailybkp.bat can be used as a foundation for development.


Posted on August 2, 2010 in Uncategorized