The many different approaches to performance tuning SQL Server

Ever since we started the GLASS user group this spring, I’ve had the idea that we would have a lightning talk style meeting, where we have several shorter presentations instead of one long one. My goal was to give newer speakers a chance to dip their toes in the water and help them build toward a full session they could present later.

Everyone has a different approach to tuning SQL Server. Different is good, at least on this topic. There can be a lot of friction when trying to troubleshoot where the slowness is happening especially when an organization has a lot of silos. If the tier 1 support has to talk to tier 2 support who has to talk to a developer who has to talk to a server admin who has to talk to a dba who has to talk to a storage admin who… you get the point. I want to get as many perspectives of real world solutions to performance problems together in the same room. Some may think of it as a WWE style smackdown but I think the collaboration would be insanely beneficial.

I couldn’t have been more right :]

Kyle talked about implicit conversions specific to SSRS, Mike talked about partitioning, Dave talked about the optimizer, Tom talked about the speed of a single DECLARE versus multiple DECLARE statements, and I wrapped it up with performance triage using metrics, queries and real world fixes.

The performance tuning process is just that, a process, not a single answer to a problem. There are several ways to approach slowness of an application, and how you proceed depends on the situation. Dive right into active queries? Look at the VM CPU graph? Fire back with a bunch of questions? I’ve personally taken all of these angles and found some successes, and a bunch of failures, along the way.


Posted by on September 20, 2015 in PASS, SQL Admin, SQL Dev


Cisco Live 2015 San Diego Recap

This was an impressive conference! Photo Album

I flew in Sunday and enjoyed a day getting familiar with sunny San Diego and getting a good night’s rest before my test Monday morning. I passed the CCENT exam after a grueling month of preparation. I’m not sure what direction I will take with my Cisco certifications, but the CCNA Data Center track looks appealing and has some overlap with what I have already learned.

The number and quality of the sessions make me consider this conference one of the best I have attended. I’ve been to VMworld, EMC World, SQL Rally and SQL Connections, and this one ranks at the top for overall quality. I’d recommend it to anyone remotely close to managing a network.

I focused on storage networking, security and UCS for the sessions I attended. I was able to get some time at the whiteboard with a Fibre Channel expert who helped me walk through a possible upgrade path. In the storage networking sessions I had some interesting discussions about flash arrays with my peers. It looks like a lot of people are getting into testing “seed” units that were provided for free.

The conference food was just OK, but the exhibit hall had some good appetizers and drinks. The Gaslamp Quarter is a hotbed of excellent restaurants, including Fogo de Chao, which was well worth my $70 for dinner. The entertainment was great. OK Go opened up the conference keynote and Aerosmith rocked Petco Park. Mike Rowe had some hilarious stories and a good closing message.

I like to travel and learn about technology. It’s always re-invigorating to attend a conference, and I hope there are many more in my future.


Posted by on July 3, 2015 in Uncategorized


ICND1 100-101 Study Progress 2

I have reached page 682 of the Odom book, which is where I am going to stop. Now I am going to finish typing up my notes. Next I will use the attached CD to quiz myself to figure out what areas I need to brush up on in the coming weeks.

CHAP19 Subnet Design p533
– count the bits; know the powers of 2
– 2^10 is 1024 and that is easy to remember

CHAP20 VLSM p561
– old routing protocols don’t support VLSM (e.g. RIP)
– no additional config to get this to work
– be able to find overlap of networks to troubleshoot

CHAP21 Route Summarization p577
– strategy used for performance to lower the size of routing tables
– subnet design should have summarization in mind
Steps to finding the best summary route
1. list all decimal subnets in order
2. note low and high points
3. pick the shortest prefix length mask and subnet -1
4. calculate new potential network mask summary
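The four steps above can be sketched with Python’s stdlib ipaddress module. This is a hypothetical helper for checking answers, not the book’s manual method: start from the lowest subnet and keep shortening the prefix until one network spans the low and high points.

```python
import ipaddress

def summary_route(subnets):
    """Find the shortest-prefix network that covers all given subnets."""
    nets = sorted(ipaddress.ip_network(s) for s in subnets)
    low, high = nets[0], nets[-1]           # step 2: note low and high points
    candidate = low
    # steps 3/4: widen the mask until the candidate spans low..high
    while not (candidate.network_address <= low.network_address
               and high.broadcast_address <= candidate.broadcast_address):
        candidate = candidate.supernet()
    return candidate

# e.g. four contiguous /24s summarize to one /22
print(summary_route(["172.16.0.0/24", "172.16.1.0/24",
                     "172.16.2.0/24", "172.16.3.0/24"]))  # 172.16.0.0/22
```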

CHAP22 Basic ACLs p599
– ACLs most common use is a packet filter
– can match source and/or destination
– match packets for QoS
– to filter packets you must enable the ACL on an interface, either inbound or outbound
– NAT uses ACL permits
– when processing ACL list router uses first match logic
– ex command: access-list 1 permit
– To figure out wildcard, get mask and subtract

*know where the best place to put the ACL is and on what router in the path
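The “get mask and subtract” trick for wildcards can be sketched as a tiny helper (a hypothetical illustration, stdlib only): each wildcard octet is 255 minus the corresponding mask octet.

```python
def wildcard(mask):
    """Convert a dotted-decimal subnet mask to its ACL wildcard mask."""
    return ".".join(str(255 - int(octet)) for octet in mask.split("."))

print(wildcard("255.255.255.0"))    # 0.0.0.255
print(wildcard("255.255.255.240"))  # 0.0.0.15
```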

CHAP23 Advanced ACLs p623

ACLs are numbered or named
– to make a change to the list, must delete the whole list and reconfigure
– extended ACLs allow for more packet headers to be searched
– example command: access-list 101 permit protocol SIP wildcard DIP wildcard
– example command: access-list 101 deny tcp any gt 1023 host eq 23
– keywords can be used instead of port #s (HTTP instead of 80)

Named ACLs, differences
– easier to remember
– subcommands not global
– allows single line deletion

numbered ACLs allow for new style of command

config t
do show ip access-list 24

– use the “enable secret” command
– username secrets if external auth not available
– disable telnet
– avoid using simple password checking
– disable unused services
– use ACLs to secure SSH
– extended ACLs close to source
– Standard ACLs close to destination
– Specific ACLs early in list

enable secret myPass
-this sets the password of myPass to reach enable mode

CHAP24 NAT p653
– CIDR route summarization
– classless interdomain routing
– inside local: local ip assigned to host
– inside global: what the internet knows your network as. address used to represent inside host as packet hits internet
– outside global: public ip outside enterprise (the ip of the URL you are trying to access)

PAT is port address translation
pic on p664-uses source port to return traffic to proper client
NAT troubleshooting
-don’t mix up ip nat inside and ip nat outside addresses
-don’t mix up local and global addresses in this command: ip nat inside source static
-dynamic NAT uses ACLs, check these
-PAT uses the overload command on ip nat inside source command


I took a couple of 10 question tests from the CD. The idea was to hit some chapters that I struggled with, which were WANs, ACLs and NAT. I got 6 out of 10 questions right, which isn’t all that great.

Next I took a test on the first 5 chapters of the book. I scored 8 out of 10 right, which is passing for the book test. The only concept I wasn’t sure on was crossover cable pin numbers and when to use a straight-through versus a crossover cable. I knew like devices use crossover cables, but that alone didn’t help me get the two questions right. I may memorize this table for the test.

Transmit on pins 1,2: routers, PCs
Transmit on pins 3,6: hubs, switches

Posted by on May 30, 2015 in Network Admin


ICND1 100-101 Study Progress


I’m starting to see the fruits of an aggressive study plan. Here we are, May 23rd, roughly two weeks until test time and I am nearly on track.

Part I: Networking Fundamentals
Part II: Ethernet LANs and Switches
Part III: Version 4 Addressing and Subnetting (Be done by May 11th and practice subnetting)
Part IV: Implementing IP Version 4 (Be done by May 18th and practice show commands)
Part V: Advanced IPv4 Addressing Concepts
Part VI: IPv4 Services (Be done by May 26th, decide if I want to skip IPv6, Review OSPF and practice more advanced subnetting)
Part VII: IP Version 6
Part VIII: Final Review (Be here by June 1st and have taken a practice exam to decide what areas to review)

I got off to a rocky start with an older 2008 version of the book. Fortunately my study buddy had purchased the correct book instead of borrowing an old one. I had gotten two chapters into the old book before I started to really get into the newer edition, which took a week to receive. I decided to take a practice test early on. The test is very configurable. I chose study mode for 45 questions and limited myself to 90 minutes with a small chunk of whiteboard. I also decided to exclude any IPv6 questions from this first stab.

After two chapters and a couple videos on subnetting I was able to get a 600, which is 200 points away from passing. This was on the practice test that came on the CD in the book. The higher layer concepts I did quite well on, whereas the lower layer concepts such as routing, WANs, ACLs and any kind of IOS commands and configuration questions I did very poorly on. Subnetting seems to get a lot of attention either directly or indirectly, and I was sitting at about 50% or less on that.

What is subnetting?

Don’t listen to me, I’m not an expert, but I don’t think there are many good explanations of this out there. A lot of people go way deep and off on tangents too frequently. Here is my overview of what I understand are the important subnetting concepts for ICND1.

IP Address = 32 bits = 4 octets = 4 bytes

Each byte can store 256 possible combinations of 1s and 0s. So let’s represent one in binary: 00001001.00000000.00000000.00000001

See, that is 32 bits in an IP address.

The second concept we need to understand is the netmask. Picture a mask you might put on your face. With a very thick mask you won’t be able to see much; with a thin mask you might be able to see a lot.

Take that concept and apply it to this very common netmask, 255.255.255.0, or 11111111.11111111.11111111.00000000

Out of all the possible combinations, that is a pretty thick mask, so I can only see a small number of hosts with it. If you combine the IP and netmask, you will be able to see the IP addresses from 9.0.0.0 through 9.0.0.255, or 256 possible hosts.

And there you have it, networking. Wait, what was I talking about? Ah yes, SUBnetting.

Subnetting takes those 256 possible hosts and divides them into smaller networks. If I needed several separate networks and only 18 hosts per network I could split that network into smaller chunks. If I want to see fewer hosts in my network I need a thicker, or higher number mask.

Pulling up the /24 mask again, 11111111.11111111.11111111.00000000 you will see it is /24 because there are 24 1s or network bits and 8 0s or host bits.

In our problem, we need at least 18 IP addresses for hosts. For this we will use the 0s. How many 0s will we need? Fewer than 8 for sure, because 8 gave me 256 options. But how many fewer?

The powers of 2 come in handy for any binary math. There are 2 possible values for each bit, 0 or 1. With 2 bits there are 4 possible values: 00, 01, 10, 11. That isn’t going to get me to at least 18 hosts. This could take a while, and for the ICND1 test you need to subnet in 15 seconds. Yikes!

In comes the cheat sheet.


Memorize this formula to go with the table: Possible hosts on a network = 2^h – 2

Each network supports 2^h ip addresses, however 1 ip address is used for the network id and another is used for the broadcast address, hence the minus 2 part.

I don’t suggest just memorizing the table; I suggest understanding how to generate it. Start from the top right and do your powers of 2 up to 128: 2^0 = 1, 2^1 = 2, 2^2 = 4 … 2^7 = 128

Next is the second row, the decimal mask. Take 256 – the h row to get the decimal mask row.

Next is the CIDR notation row. This is simply a count of the 1s in the binary representation of the mask. Remember, 1s are the network bits and 0s are the host bits.

Once we have this table we can solve our problem, subnet in a way that supports at least 5 networks and at least 18 hosts in each network.

Start this question with the important number h, or 18.

Go to the table and find the h value that supports at least 18 hosts, which is 32.

Go down to the decimal notation .224, and we know that we can support at least 18 hosts with a decimal mask of 255.255.255.224.

Next we can list the network IDs that this mask could possibly create.

To figure this out mathematically, take 2^n where n is the number of network bits. There are 3 network bits, or 1s, in the octet we subnetted. We can make 8 networks, which is greater than the 5 required by the problem. BOOM CAKE!
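The table lookup above can also be done in a few lines of code. This is a hypothetical sketch that applies the 2^h − 2 formula, assuming we are subnetting a /24 as in the example:

```python
def plan(min_hosts, base_prefix=24):
    """Smallest subnet that fits min_hosts usable addresses (2**h - 2 rule)."""
    h = 1
    while 2**h - 2 < min_hosts:   # find the host-bit count from the formula
        h += 1
    prefix = 32 - h               # count of 1s (network bits) in the mask
    mask_octet = 256 - 2**h       # decimal mask of the subnetted octet
    networks = 2 ** (prefix - base_prefix)  # subnets carved out of the /24
    return prefix, mask_octet, networks

prefix, octet, nets = plan(18)
print(prefix, octet, nets)  # 27 224 8
```

With at least 18 hosts required, 5 host bits give 30 usable addresses, a /27 with a .224 mask, and 8 possible networks, matching the worked answer above.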


For the remainder of this post I will simply be typing up my notes from the Wendell Odom Cisco Press book, plus some other notes I took while watching YouTube videos from a variety of authors, which I will link to.


1. Physical – wiring standards, physical topology, bandwidth usage, synchronizing bits
2. Data Link – MAC, flow control standards
3. Network – IP, IPX, switching, route discovery, TTL
4. Transport – TCP, UDP, windowing, buffering
5. Session – NetBEUI
6. Presentation – JPG, encryption, data formatting (ASCII, EBCDIC)
7. Application – HTTP, SMB, SMTP, service advertisement, DNS

IP Addressing

First Octets
CLASS A – 1-127
CLASS B – 128-191
CLASS C – 192-223
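The first-octet ranges above translate directly into code. A hypothetical helper, just for drilling the ranges:

```python
def ip_class(ip):
    """Classful address lookup from the first octet, per the ranges above."""
    first = int(ip.split(".")[0])
    if 1 <= first <= 127:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    return "D/E"

print(ip_class("10.1.2.3"), ip_class("172.16.0.1"), ip_class("192.168.1.1"))  # A B C
```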

Hub – layer 1 device that simply spams all ports with frames

Remember these things in this order
SEGMENT – includes the tcp ports
PACKET – includes the IP
FRAME – the whole stinking thing with headers and trailers

Encapsulation – IP Packet is a Layer 3 PDU

CHAP2: Fundamentals of Ethernet Lans

UTP – unshielded twisted pair

crossover cable

like devices need crossover cable to switch transmit and receive pins

MAC – 48bits – 24 for OUI

FCS – frame check sequence is at the end of the frame to ensure proper delivery


leased line , service provider
CPE – customer premises equipment
CSU/DSU – channel service unit, data service unit usually on prem and RJ-48
Router-Router communication can occur on serial cables
HDLC – high level data link control
——way of encapsulating frames over WAN
PPP – point to point protocol
MPLS – multi protocol label switching

CHAP4: IPv4 Addressing and Routing

Routing uses L3PDUs
Layer 2 are called frames

IPv4 headers are 20 bytes and include SIP,DIP,Len,offset,chksum,ttl,etc…

CLASS A: 126 networks and 16,777,214 hosts per network
CLASS B: 16,384 networks and 65,534 hosts per network
CLASS C: 2,097,152 networks and 254 hosts per network
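The per-class host counts above all follow from the 2^h − 2 formula (two addresses per network are reserved for the network ID and broadcast). A quick check:

```python
# Class A leaves 24 host bits, Class B 16, Class C 8
for cls, host_bits in [("A", 24), ("B", 16), ("C", 8)]:
    print(f"CLASS {cls}: {2**host_bits - 2:,} hosts per network")
```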

Router FWD logic
1. uses FCS to make sure no errors
2. discard old frame header and trailer
3. compare DIP to routing table and find next hop
4. encapsulate

CHAP 5: fundamentals of TCP/IP transport applications

UDP – connectionless

Connection establishment (three-way handshake)
SYN —->
<—- SYN, ACK
ACK —->

Connection termination (FIN and ACK exchanged in each direction)

shutdown – command that turns a port down/down
no shutdown – turns a port up/up (the second up is if the protocol works)

CHAP 8: configuring Ethernet Switching

enable secret mYpass
show history
no shutdown

port security
1. switchport mode access (access or trunk)
2. switchport port-security (enables port security)
3. switchport port-security maximum 2 (allowed macs on port)
4. switchport port-security violation shutdown (action to take)
5. switchport port-security mac-address AAAA:AAAA:AAAA (specify allowed macs)
6. switchport port-security mac-address sticky (dynamic learned mac addresses)

CHAP 9: implementing VLANs

ISL = OLD protocol

12 bits for the VLAN ID (this is a “shim” in the frame)
how many vlans? 2^12 or 4096
vlanid 1 is default

router on a stick – one physical link to a router instead of two

show vlan brief

(allow port 4 to communicate on vlan id 10)
1. enable
2. configure terminal
3. interface FastEthernet0/4
4. switchport access vlan 10

Layer3 switch does routing …but can’t do this in packettracer :[

Reasons switch prevents VLAN traffic from crossing a trunk
1. removed from allow list
2. vlan doesn’t exist in show config
3. vlan doesn’t exist, been disabled

and some other less important reasons

CHAP 10 Troubleshooting

show cdp neighbors
show interfaces status

“administratively down” means shutdown command was run
err-disabled means port security

vlan troubleshooting
1. identify all access interfaces and their vlans
2. do vlans exist and are they active
3. check allowed vlan list on both ends of the trunk
4. check for trunk/no trunk neighbors

show vlan brief


IPv4 subnetting

One subnet for every:
1. vlan
2. ppp serial link
3. frame relay

VLSM – variable length subnet mask


CHAP 12 analyzing classful IPv4 Networks
CHAP 13 analyzing subnet masks
CHAP 14: analyzing existing subnets

CHAP 15: Operating Cisco routers

Installation steps
1. connect lan ports
2. connect CSU/DSu external
3. connect CSU/DSU internal
4. connect console port to pc using a rollover cable
5. connect power
6. power on

show ip route

show mac address-table

status layer 1/status layer 2
down/down : has not been shutdown but physical layer problem

CHAP 16: configuring IPv4 addresses and routes

1. choose to process frame
-proper mac (is its destination me?)
-no errors (FCS)
2. de-encapsulate packet
3. compare DIP to routing table
-this identifies outgoing interface
4. encapsulate
5. transmit

routers should ignore switch floods not intended for it

large routing tables can cause performance problems

cisco express forwarding
-uses organized tree and other tables to help speed up routing

adding routes can be done via:

1. connected routes
2. static routes
3. routing protocols

cisco will add routes if the interface is IP’d and UP

ROAS 802.1Q trunk


commands to turn on

router ospf 1
network area 0

ospf – open shortest path first – uses link state
OSPFv2 is for IPv4

routing protocol – set of messages, rules and algorithms (RIP, EIGRP,OSPF,BGP)

routed & routable protocol – defines packet structure and addressing (IPv4)

1. learn routing information about ipsubnets from neighboring routers
2. advertise this info
3. if more than 1 route exists, pick best
4. if topology changes, advertise current best route (convergence)

Interior gateway protocol – designed for use inside a single autonomous system
exterior gateway protocol – BGP

routing algorithms use
1. distance vector
2. advanced distance vector
3. link state (ospf uses this)

RIP is old
IGRP is a little less old

RIP-2 uses hop count and is also old with slow convergence
OSPF is a cost based protocol
EIGRP – Cisco proprietary and uses bandwidth and delay
IS-IS – uses link state

Administrative distance by route source:
0 connected
1 static
20 eBGP (external)
110 OSPF
115 IS-IS
120 RIP
200 iBGP (internal)

this will show the database of link state advertisements(LSAs)
show ip ospf database

routers must agree to be neighbors

configuration: this will turn OSPF on for any interface that matches 10.0.* because of the wildcards in the network command

router ospf
network area 0


Discover – TO FROM

ip helper-address {dhcp server ip} – command for the router that enables DHCP servers to sit outside of the subnet by changing SIP & DIP (Thanks /u/Sprockle)


Posted by on May 23, 2015 in Network Admin


Gearing up for another exam ICND1 100-101

The more I learn about networks, the less I tend to blame the network.

It was almost 20 years ago that I set a static IP address on my sister’s computer and connected a crossover cable to my computer so we could play a game called Quake. She wasn’t that interested, so I ran back and forth between the rooms and played by myself. This loneliness was resolved a few years later with a device that looked something like this


Point is, I’ve been doing this for a long time and I still don’t know jack. I don’t like to fail tests, so signing up for one is going to help me learn. I would like to become a more well-rounded datacenter administrator.


ICND1 100-101 is the first half of a valuable certification, the CCNA. I now have the book in hand and about 5 weeks to prepare. Normally I would allow myself about 3 months with a book this size, but opportunity has struck and I need to accelerate my pace.

Like Microsoft, Cisco is very open with their exam topics.

1.0 Operation of IP Data Networks 6%
2.0 LAN Switching Technologies 21%
3.0 IP addressing (IPv4/IPv6) 11%
4.0 IP Routing Technologies 26%
5.0 IP Services 8%
6.0 Network Device Security 15%
7.0 Troubleshooting 13%

These do not line up that nicely with the book topics, but I am going to attempt to cruise through the book, for which I have given myself some milestones below.

Part I: Networking Fundamentals
Part II: Ethernet LANs and Switches
Part III: Version 4 Addressing and Subnetting (Be done by May 11th and practice subnetting)
Part IV: Implementing IP Version 4 (Be done by May 18th and practice show commands)
Part V: Advanced IPv4 Addressing Concepts
Part VI: IPv4 Services (Be done by May 26th, decide if I want to skip IPv6, Review OSPF and practice more advanced subnetting)
Part VII: IP Version 6
Part VIII: Final Review (Be here by June 1st and have taken a practice exam to decide what areas to review)

The schedule is set, plans are in place, now it is time for me to do some reading.


Posted by on May 9, 2015 in Network Admin


T-SQL Tuesday #065 – Slowly Changing Dimensions

I’ve been focusing a lot of my study time on data warehousing lately. I’ve been supporting the system and storage side of data warehouses for a while, but lately I have been digging into the developer topics.

What I learned over the weekend is how to build a working, slowly changing dimension in SSDT. Thanks for the challenge #tsql2sday and @SQLMD!


The Problem

Dimensions are the tables we design to make data look good in a pivot chart. They are the tables that describe our facts. Customer is a good example of something that could be a dimension table. For my challenge I decided to use virtual machine as my dimension.

The problem is, what if a VM’s attributes change? 4 cores? That was yesterday… today PRDSQLX has 24 cores. What if someone deletes a VM? How many cores did it have?

I can get the current status of my VMs from the source system, but the problem is the history. I can pull a snapshot of what VMs I have in my environment every day from the source system. I could just make copies of that data and slap a “PollDate” column on the table. Voila, I have everything I need, and about 1000x more than I need.

There is the problem, how do I collect and save a history of my VM’s attributes?


Each column in my VM table can be one of 3 basic types

Type 1. Simply overwrite the value… it changes a lot and I don’t care about history (e.g. what host the VM is running on)
Type 2. Add a new row to maintain history… if one column in my VM row changes, I get a whole new record in my dimension
Type 3. Add a new column to keep a limited amount of history… add columns like previous_num_cpus and previous_previous_num_cpus and move data over as it changes

So we have to take the data we get in a nightly snapshot of the source, compare it to what we have in the destination, and then do a conditional split. I’m sticking to handling these differences:

New VM – insert with NULL validto (easy)
Deleted VM – change validto column (create staging table and do an except query)
Change in Type 1 Col – update existing VM row with NULL validto column, (easy)
Change in Type 2 Col – insert new row with NULL validto column, change previous record’s validto date (a little tricky)
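Outside of SSIS, the split logic for the four cases above can be sketched in plain code. This is a hypothetical illustration (the column names host and cpus are made up), not what the SCD wizard actually generates:

```python
def classify(source_row, dim_row, type1_cols, type2_cols):
    """Pick a path for one VM. Rows are dicts keyed by column name;
    dim_row is the current (validto IS NULL) dimension record, or None."""
    if dim_row is None:
        return "insert new"       # new VM
    if source_row is None:
        return "close validto"    # deleted VM (found via the EXCEPT query)
    if any(source_row[c] != dim_row[c] for c in type2_cols):
        return "new version"      # Type 2 change: insert row, expire the old one
    if any(source_row[c] != dim_row[c] for c in type1_cols):
        return "overwrite"        # Type 1 change: update in place
    return "no change"

# e.g. the host moved (Type 1) but the core count (Type 2) is unchanged
print(classify({"host": "esx2", "cpus": 4}, {"host": "esx1", "cpus": 4},
               type1_cols=["host"], type2_cols=["cpus"]))  # overwrite
```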

That logical split can be made easier by using the Slowly Changing Dimension task in SSDT. It pops up a wizard to help you along the way and completely sets you up for several failures, which I am going to let you learn on your own :]

Step 1. Setup an initial loading package.

This will make it handy to restart your development.

Query the source in a data flow OLE DB Source
Tack on a few extra columns (validfrom, validto, isdeleted, sourcesystemid) in the SQL command
Create the destination table using the New button (this is pretty handy to avoid manually lining up all the datatypes)
Use the New button again to create a dimVM_staging table for later
Add a task at the beginning of the control flow to truncate the destination (dimVM) table
Run the package, and be careful not to accidentally run it again later since it has a truncate

Step 2. Create this monstrosity

Control Flow

Data Flow

It is actually not too terribly bad. When you add the Slowly Changing Dimension a wizard pops up and when all the stars align, all the data flow transformations and destination below are created.

If we focus on the top of the data flow first, it is easy to see I am pulling from two source systems and doing a union all. The interesting problem I had to solve was the deleted VM problem. The wizard didn’t do that for me. I knew if I had the staging table, I could compare that to the dimVM to see if anything was missing. If you want to find out what is missing, use an EXCEPT query. Once you find out what is missing (deleted VMs) we can update the validto field effectively closing up shop on that row but keeping the history of rows relating to that VM. I decided to add the isdeleted column to make it easier to find deleted VMs. This code is in the SQL Script task on the control flow.

update dimVM
set dimVM.validto = getdate(), dimVM.isdeleted = 1
from dimVM
inner join (
    select vmid, vcenter from dimVM
    where validto is null
    except
    select vmid, vcenter from dimVM_staging
) del
on dimVM.vmid = del.vmid and dimVM.vcenter = del.vcenter

One last little tidbit. If you make any modifications to the transformations that the SCD wizard created, you should document them with an annotation. If for some reason you have to get back into the wizard, it will recreate those transformations from scratch… ironically not maintaining any history.

Step 3. Profit

I hope you enjoyed hearing about my new experiences in the Slowly Changing Dimension transformation in SSDT.


Posted by on April 14, 2015 in SQL Dev


Final Preparation for 70-463 Implementing a Data Warehouse with Microsoft SQL Server 2012

This is a continuation of this post

Two fellow bloggers have been posting more quality information on this test.

When reading the book I skipped over all of the practice sections. I did read the exam tip sections inside of the practice but never actually practiced. I don’t have a lot of hands-on experience with SSIS and even less with MDS/DQS. I spent about 9 weeks making it through the book while skipping the practice and most of the reviews. I probably would have needed an additional 18 weeks to properly make it through all of the practice or lab type sections of the book. Learn one, do one, teach one is my favorite method to mastery, but with the 2nd shot deadline, I didn’t have a lot of time to prepare.

To supplement, I attempted to find videos on YouTube and watched videos on the Microsoft Virtual Academy. Both sources were not very demo heavy. What I did find is CBT Nuggets, which gives a 7 day trial. The 70-461 videos that I was able to watch were very high quality, fast paced and demo heavy. This is exactly what I needed at this time. I’d recommend a membership if you have a bundle of money burning in your pocket.

Since my trial was up, I decided to type up my CBT Nuggets notes.

Connection managers
control flow -> doesn’t involve data
bottom level are private connection managers, a.k.a package level
right solution explorer is project level connection managers which are global
you can enable/disable sequence containers
precedence constraints, go to properties to define AND or OR logic
copy-> paste package connection managers
delay validation -> doesn’t check structure
email doesn’t have a port option but could purchase add-ins or write your own
fix for NULLs is COALESCE

Data Flow
rows, buffers, pipeline,transformations
raw file -> ssis only -> good for sharing data between packages
raw file -> good for resuming packages
recordset->variable used to loop through
for performance, aggregate at the source since that is blocking
import export col -> for blob data
term matching is like CTRL+F
blocking tasks take lots of memory -> sort, aggregate
partial-blocking -> merge chunks

Data Quality services
cleansing matching
server is 3 databases
dqs client is used for creating KBs
creating a knowledge base
-open xls sheet -> job title list for KB example
-KB needs a domain, circle with * button is domain
State length of 2 is an example domain rule
composite domain (EX: address which includes city state zip)
reference data source RDS (ex: Melissa Data for addresses)
KB’s get published
activity is automatically logged

Implementing DQS
data profiling task in SSDT
-profile types
–null ratio request
–pattern generator RegEx for formatting
–column statistics
-then specify column
Quick profile: runs against all columns
Open data profile viewer
suggested confidence level
corrected confidence level
DQS cleansing task
Job title source job_title _output
jobtitles table
newKB->domain->source column (survivor record)
the table with the + button to add a rule and use the Rule Editor

Implementing MDS
proactive management
people place concepts or things
non-transaction data is good for MDS
includes auditing and versioning
MDS Components (Database, Config Mgr, MD Mgr, web service, MDS model deploy, Excel Add-In)
MDS Objects(Models: the container db, Entities: like tables, Attributes: like columns, Hierarchies, Members: Actual data)
Install requires PowerShell 2.0, IIS 7.5, Silverlight and a database
has integration with DQS
to deploy packages that contain data must use CLI (deploynew -package “” -model)

Data flow
merge join requires sort -> advanced editor, pick isSorted and the column
MetaData problems: double click on flow and change types
Lookup transformation
-cache connmgrs for re-use
–redirect rows
–multi output popup
slowly changing dimension task (wizard)
fixed attribute fail on change
changing attribute type 1 overwrite type 2 new records (history)
inferred member flag goes in dimension
blocking oledb command
redirect error rows to flat file

executing packages
dtexec.exe is fire and forget style
built-in SPs in ssisdb
catalog.set_obj_param value
restartable packages
-checkpoint file
-tracking last successful step in control flow
project properties
-select file name
-set usage never
–if exist
-save checkpoints = true
-set property fail package on failure = true
to test, can set task property to force a failure


Posted by on April 9, 2015 in SQL Admin, SQL Dev


More Preparation for 70-463 Implementing a Data Warehouse with Microsoft SQL Server 2012

This is a continuation of my previous post. This is just some very quick notes that I am posting for my benefit and so that readers may get an idea of the preparation necessary for this test. They are my notes from this book:

PART II: Developing SSIS Packages

simple data movement – can use import export wizard
complex data movement – SSDT
SSDT is visual studio shell used to develop IS,AS,RS projects
Control Flow connection managers can be package or project scoped
Connection manager types:
ADO – backwards compatibility – compatible with sql server
AS – analysis services
File – SSIS data type
Flat file – delimited file
ftp – security option is only basic auth
http – web services or file, no windows auth
OLE DB – sql server, will be removed in favor of ODBC
ODBC – open database connection
SMTP – basic email auth only

package scoped connection managers will override the higher level project scoped connmgrs

control flow tasks and containers
containers help control execution of tasks
transformations include
cleansing – remove invalid data or unwanted data
normalization – XML value to varchar
conversion – byte[] to varbinary(max)
translation – “F” to “Female”
data calculation and data aggregation
data pivoting and data unpivoting

ssis tasks categories, data prep, workflow, data movement, SQL admin, SQL maintenance

containers, for loop, foreach loop, sequence

Precedence Constraints (the arrows that come off of tasks)

success, failure, completion
dotted lines mean OR and solid lines mean AND logic when multiple tasks are involved in the flow

Designing and Implementing Data Flow

Data Flow is a level deeper than the control flow
Control flow triggers data flow
data flow task builds execution plan from data flow definition
data flow engine executes the plan
*Validate external metadata – checks for existence of tables and objects; should be turned off if they are created dynamically
bulk OLE DB = fast load
ODBC = batch
fast parse is available at the column level on some data types ( date, time, int )
Working with data flow transformations
-Blocking (ex: sort, aggregate) transformations that read all data in before passing any rows down the pipeline
-Non-Blocking -> lookup, multicast, conditional split or other row-by-row transformations
-partial-blocking -> merge, merge join, union all, data flows in chunks
cache transformations – good for multiple transformations on same data
import/export col – good for blobs
character map – upper case, lower, linguistic bit operations
advanced data prep: dqs cleansing, oledb command, slowly changing dimension, fuzzy grouping, fuzzy lookup, script component
#NEW# Resolve references editor helps resolve mapping problems
Lesson 3: strategy and tools
lookup transformation caching
how to handle rows w/ no matches
sort is expensive, optionally perform sorts at source and use advanced editor to mark data as sorted
avoid update and delete on fact tables
do large table joins on database layer
do updates on loading or temp tables in set based sql operations
Chapter 6: Enhancing Control Flow
ssis variables and parameters
avoid retrieving external source variables more than once
parameters are exposed to the caller but variables are not
parameters are read-only and can only be set by the caller
variables are helpful for reusability
variables are user defined or system
variables can store rowsets for foreach enumerator containers
-avoid storing large rowsets in memory/variables
variable data types
-object: last resort
Int16: -32,768 thru 32,767
UInt16: 0 thru 65,535
UInt32: 0 thru 4,294,967,295
Char: 0 thru 65,535 (a single Unicode character)
Decimal: 28 or 29 significant digits
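A quick arithmetic check on those ranges (note the signed Int16 maximum is 32,767, i.e. 2^15 − 1, not 32,768):

```python
# Two's-complement arithmetic behind the SSIS variable type ranges.
int16_min, int16_max = -2**15, 2**15 - 1      # -32,768 .. 32,767
uint16_max = 2**16 - 1                        # 65,535
uint32_max = 2**32 - 1                        # 4,294,967,295

print(int16_min, int16_max)   # -32768 32767
print(uint16_max)             # 65535
print(uint32_max)             # 4294967295
```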
Variable Scope
-Package Scope
—-Container Scoped
——–task scoped
property parameterization
explicit assignment
lesson 2: connmgr, tasks, and precedence constraint expressions
expression: combination of constants, variables, parameters, column refs, functions, and expression operators
-special ssis syntax close to C++
math functions: ABS, EXP, CEILING, etc…
precedence constraints can use AND/OR logic expressions
Lesson 3: Master Package
just a normal package that uses the execute package task
use variables to expose results to parent
use project deployment model to make parameters available to child packages
use project scoped parameters
CHAP7: Enhancing Data Flow
Lesson 1: Slowly Changing Dimensions
-late arriving dims or early arriving facts
–1. insert row into dim, mark inferred… requires bit col
–2. use newly created surrogate key
–3. when loading dim overwrite inferred members
TYPE 1 SCD: overwrite
TYPE 2 SCD: keep all history
can use conditional split to see what columns changed
ex: source.fullname != dest.fullname
using t-sql hashbytes can compare for changes
–then two cols for hash val Type1 & type2
use set based updates instead of wizard
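The HASHBYTES compare idea above can be sketched in Python with hashlib (the column lists and names here are illustrative assumptions, not the book's T-SQL):

```python
import hashlib

# Sketch of hash-compare change detection for a slowly changing dimension.
# Two hashes per row: one over Type 1 columns (overwrite on change) and one
# over Type 2 columns (insert a new history row on change).
TYPE1_COLS = ["phone"]             # overwrite in place
TYPE2_COLS = ["fullname", "city"]  # keep history

def row_hash(row, cols):
    joined = "|".join(str(row[c]) for c in cols)
    return hashlib.sha1(joined.encode()).hexdigest()

def classify_change(source_row, dim_row):
    if row_hash(source_row, TYPE2_COLS) != row_hash(dim_row, TYPE2_COLS):
        return "type2: insert new version, expire old"
    if row_hash(source_row, TYPE1_COLS) != row_hash(dim_row, TYPE1_COLS):
        return "type1: update in place"
    return "no change"

dim = {"fullname": "Jo Smith", "city": "Flint", "phone": "555-0100"}
# Phone change only -> Type 1; city change -> Type 2.
print(classify_change({"fullname": "Jo Smith", "city": "Flint", "phone": "555-0199"}, dim))
print(classify_change({"fullname": "Jo Smith", "city": "Detroit", "phone": "555-0100"}, dim))
```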
Lesson 2: preparing a package for incremental load
dynamic sql
change data capture
Dynamic SQL in OLEDB source
1. select dataaccess mode of sql command and use ? to pass parameter
2. pass variable to sql command and use expressions to modify the sql string
cdc functionality – cdc source and cdc splitter
-ALL, ALL w/old, net, netw/update mask, net w/merge
lesson3: error flows
route bad rows – fail, ignore (copies null), redirect rows
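The three error-flow options can be sketched as row routing in Python (a conceptual model, not SSIS internals; the conversion and names are invented for illustration):

```python
# Sketch of error-flow handling for a failing conversion: fail the
# component, ignore (keep the row, null the bad value), or redirect the
# bad row to an error output.
def process_rows(rows, on_error="redirect"):
    good, errors = [], []
    for raw in rows:
        try:
            good.append(int(raw))
        except ValueError:
            if on_error == "fail":
                raise                    # whole component fails
            elif on_error == "ignore":
                good.append(None)        # value replaced with NULL
            else:                        # redirect
                errors.append(raw)       # row sent to error output
    return good, errors

print(process_rows(["1", "x", "3"], "redirect"))  # ([1, 3], ['x'])
print(process_rows(["1", "x", "3"], "ignore"))    # ([1, None, 3], [])
```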
chapter 8: creating robust and restartable packages
can set transactions at package control flow or task level
transactions use msdtc
transaction options are: required, supported, not supported
transactions work on control flow not data flow
can nest a not supported execsql that won’t rollback inside a transaction (ex: still want to audit on fail)
lesson2: checkpoints
SaveCheckpoints must be turned on at the package level
creates a file and restarts from it if it exists
starts from the beginning if it does not exist
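That restart behavior can be sketched as a checkpoint file listing completed tasks (a conceptual model with made-up task names, not the SSIS checkpoint file format):

```python
import json
import os

# Sketch of checkpoint-style restartability: completed task names are
# saved to a file on failure; a later run skips them, and the file is
# removed after a fully successful run.
def run_package(tasks, checkpoint_file):
    """tasks: list of (name, callable). Returns names executed this run."""
    done = set()
    if os.path.exists(checkpoint_file):          # restart: skip finished tasks
        with open(checkpoint_file) as f:
            done = set(json.load(f))
    executed = []
    try:
        for name, fn in tasks:
            if name in done:
                continue
            fn()
            done.add(name)
            executed.append(name)
    except Exception:
        with open(checkpoint_file, "w") as f:    # save progress for restart
            json.dump(sorted(done), f)
        raise
    if os.path.exists(checkpoint_file):          # success: start fresh next time
        os.remove(checkpoint_file)
    return executed
```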
lesson3: event handlers
can turn event handlers off for task
chapter 9: implementing dynamic packages
project level and package level connection mgrs and parameters
must be deployed to ssis catalog
parameter design values are stored in the project file
cannot change parameter value while package is running
property expressions are evaluated on access
lesson2: package configs
enable package deployment model
can get parent package configs
chapter10: auditing and logging
logging: package configuration
auditing: dataflow transformation component
lesson1: logging packages
providers are: txt file, sql profiler, sql server, event log, xml
log event types: boundary, progress, exception
use parent setting is default
ssis control flows can be configured for logging
lesson2: auditing and lineage
elementary auditing – captures changes
complete – adds usage or read activity
audit transformation editor
lesson3: preparing package templates
keep packages in source control

Part IV: managing and maintaining ssis packages
ssis service is required in production
ssisdb new
package install utility is legacy
can use ssdt or ssms to deploy packages
project model or package model
dtexecui is legacy
can use TSQL, powershell, manual dtexec cli to execute packages
agent to schedule packages
introduced master package concept
securing packages: uses sql security concepts of principals and securables
ssis_admin role
ssis_user by default allowed to deploy, and deployer is allowed to read, modify, execute
Chapter 13: troubleshooting and perf tuning
breakpoints work only in control flow
breakpoints can fire on a hit count
data viewers on path will show grid view of data
use error outputs to catch bad rows
test with a subset of data
basic logging is default
switch to verbose when there are problems
data taps are like dataviewers for production
must be predefined using catalog.add_data_tap for specific data flow
lesson2: perf tuning
buffers are a group of data in data flow
determined automatically
Transformation Types
-non-blocking: row based synchronous
-partial blocking: asynchronous transformation
-blocking: asynchronous
backpressure controls flow for best memory control
max buffer rows – 10,000 default
max buffer size – 10MB by default
fast load on destination
full-cache lookups
avoid oledb transformations
BLOBs get swapped to disk
data flow engine threads
MaxConcurrentExecutables: -1 (the default) = # of logical processors + 2
perfmon counter: buffers spooled
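The interplay of the two buffer defaults above can be sketched as a simple calculation (an assumption-laden simplification of how the engine sizes buffers, not its actual algorithm):

```python
# Sketch of data flow buffer sizing: rows per buffer is capped both by
# DefaultBufferMaxRows (10,000) and by how many rows of the given width
# fit in DefaultBufferSize (10 MB by default).
def rows_per_buffer(row_width_bytes,
                    max_rows=10_000,
                    buffer_size_bytes=10 * 1024 * 1024):
    return min(max_rows, buffer_size_bytes // row_width_bytes)

print(rows_per_buffer(100))    # narrow rows: capped by max_rows -> 10000
print(rows_per_buffer(5_000))  # wide rows: capped by buffer size -> 2097
```

Wide rows hit the size cap first, which is why trimming unused columns out of the pipeline lets each buffer carry more rows.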

PART V: Building Data Quality Solutions

chapter14: installing and maintaining DQS
Soft dimensions: timeliness, ease of use, intention, trust, presentation quality
hard dimensions: accuracy, consistency
Schema dimensions: completeness, correctness, documentation, compliance w/theoretical models, minimalization
activities: understand sources and destinations
security and backups managed through ssms
Chapter15: implementing MDS
metadata, transactional, hierarchical, semi-structured, unstructured, master
MDM goals: unifying or harmonizing, maximizing ROI through reuse, supporting compliance, improving quality
MDM: a coordinated set of tools and policies to maintain accurate master data
map master data dimensions to DW
Installing MDS: DB, Service(Needs IIS), Manager, Excel Add-IN
Creating an MDS model:
1. Model
2. Entities (like tables)
3. Attributes (like columns)
Derived hierarchies: Recursive with TOP = NULL (ex: Org Chart)
Explicit Hierarchies – Organization can go any way
Collection: flat list of members
MDS service performs business logic
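The recursive derived hierarchy with a NULL top member (the org-chart case above) can be sketched like this, with made-up member data:

```python
# Sketch of a recursive (derived) hierarchy: the top member has a NULL
# (None) parent, and every other member's level is one deeper than its
# parent's -- the org-chart example from the notes.
def hierarchy_levels(members):
    """members: {name: parent_name or None}. Returns {name: depth from root}."""
    def depth(name):
        parent = members[name]
        return 0 if parent is None else depth(parent) + 1
    return {name: depth(name) for name in members}

org = {"CEO": None, "CFO": "CEO", "Controller": "CFO", "CTO": "CEO"}
print(hierarchy_levels(org))  # {'CEO': 0, 'CFO': 1, 'Controller': 2, 'CTO': 1}
```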
Chapter16: managing master data
MDS Packages
-Model deployment package to move data to another server
-wizard only includes metadata
-permissions are not included
-MDSModelDeploy command prompt if you want to move data
exporting – tsql on subscription views, webservice
Security, system admin (one user, tsql to change), model admin (complete model access)
entity permissions apply to all attributes
mds add-in for excel (connect to http://server:8080)
when model and member permissions overlap, read-only > update and deny > *
excel add-in can use DQS KB matching
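The overlap rule above (deny beats everything, read-only beats update) can be sketched as picking the most restrictive permission; the ranking values are my own illustration:

```python
# Sketch of resolving overlapping MDS permissions: the most restrictive
# permission wins -- Deny over everything, Read-only over Update.
STRENGTH = {"deny": 2, "read-only": 1, "update": 0}

def effective_permission(perms):
    """perms: permissions granted at overlapping levels for one user."""
    return max(perms, key=lambda p: STRENGTH[p])

print(effective_permission(["update", "read-only"]))  # read-only
print(effective_permission(["read-only", "deny"]))    # deny
```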
Chapter17: creating a data quality project to clean data
knowledge discovery
domain management
reference data services
matching policy
domain: semantic representation of column
properties: data type, leading values, normalize, format, spellchecking
Term-based relation: Inc. -> Incorporated

I skipped 18,19,20: Advanced ssis and data quality topics because only 5 parts are listed on the exam prep and I ran low on time.

1 Comment

Posted by on April 8, 2015 in SQL Admin, SQL Dev


Preparation for 70-463 Implementing a Data Warehouse with Microsoft SQL Server 2012

I’m writing this post to force myself to spend some quality time with the materials for this exam. I have been at it for almost two months now and am nearing my exam date. I accelerated my plan so I could get into the 2nd shot window offered by Microsoft and also so I could finish my MCSA within 1 year. It has been a battle at times and is not an easy certification to get. Microsoft has really increased the difficulty since the MCITP for SQL 2008 which only required 2 exams.

My employer is assisting with the costs in a few ways. They will reimburse me for the cost of a passed exam. They are giving me a $500 bonus when I pass all three exams and prove my MCSA. And they have loaned me the Training Kit book along with the other test books that I have already returned.

My plan has been going fairly well. I’ve been able to put in at least 10-15 minutes about 6 days a week. Some of those sessions have lasted an hour or more, but that is pretty rare. Data warehousing is interesting to me because we have a lot of things starting up at work that may take off and require these skills. Before I started studying, I had deployed only a few packages for my own small data collection and reporting tasks as an administrator. I also do not get too involved with database design since we rely on a lot of 3rd party applications. That world is changing for me, and that is why I have been able to be a fairly good student for this last test.

So let’s get to my plan.

The percentages are the first thing to note on this page:

11% – Design and implement

23% – Extract and Transform

27% – Load

24% – Configure and deploy SSIS

15% – DQS



I like to sit down with the book and read as much as I can while taking notes. I write down a lot. When I look at it later I think, “duh I knew that why did I write it down?” But it actually helps me stay focused. Even if I just write down the title of the section, it keeps me on track. At this point, I am ready to go back and review a lot of those notes and type them up so here they are.

The book is split out into those same 5 “Parts” as listed on the exam website.

Part 1: Design and Implement
Use snowflake in a POC since it will be easier to design from the complex OLTP environment.
Star schema for everything else.
Star is just a simplified, denormalized, merged, cleansed, historical schema with fewer joins
Star schema works well for SSAS cubes, SSAS won’t be on the test (phew).
A fact is: “Cust A purchased product B on date C in quantity D for amount E”
Dimension table: Customer, Product, Date
One star per business area
The granularity level is the number of dimensions or depth you can slice by (think sales by quarter vs. sales by day)
Auditing: Who, What, When
Lineage: Where is the data coming from?
Dimensions: The goal is to make it look good in a pivot chart
-discretizing: putting values into bins and not keeping too much granularity because it doesn’t graph well
-Member Properties: columns not used for pivoting
Slowly changing: type 1- no history, overwrite; type 2 – keep history with current flag or validto-validfrom cols; type3 – limited history with additional cols like prevAddr
Keep business keys intact, create additional DW-specific keys (surrogate keys) for linking fact to dimensions, probably IDENTITY
Use a SEQUENCE if you need to know the number before inserting, request multiple at once, or need a multi-table key
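The SEQUENCE advantages above can be sketched as a toy allocator (conceptual only; in T-SQL this would be CREATE SEQUENCE and sp_sequence_get_range, and the class here is invented for illustration):

```python
# Sketch of why a SEQUENCE can beat IDENTITY for surrogate keys: values
# are available before any insert, and a whole range can be reserved in
# one call for a batch load.
class Sequence:
    def __init__(self, start=1):
        self._next = start

    def next_value(self):
        value = self._next
        self._next += 1
        return value

    def get_range(self, size):
        """Reserve `size` consecutive values at once."""
        first = self._next
        self._next += size
        return list(range(first, first + size))

seq = Sequence()
print(seq.next_value())   # 1 -- known before any row is inserted
print(seq.get_range(3))   # [2, 3, 4] -- keys reserved for a batch load
```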
FACT TABLES: made up of FKs, Measures, Lineage cols, Business keys
consider the additivity of measures. EG: can’t sum an AvgDiscCol
Fact tables should be on the Many side of the 1->many relationship
Dimensions contain the lineage data
Age is a common computed column
design dimensions first, then fact tables
use partitioning on your fact table
Fact tables contain measures
Every table should have a clustered index
Do not index the FKs of a fact table by default, because hash joins don’t use those indexes
If you are doing merge joins or nested loop joins, indexes on FKs help
indexed views are useful in some cases
Row/page compression automatically applies unicode compression
batch mode is faster and will show in the query plan
column store indexes: one per table, not filtered, not on indexed views
Partitioning function maps rows to a partition
partitioning scheme maps partition to filegroups
aligned index: partitioned the same way as the table, which allows for partition switching
optimizer can eliminate partitions
inferred member: row added in dimension during fact table load
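The function/scheme split above can be sketched as a boundary lookup (a conceptual model with made-up boundary values; in T-SQL this would be CREATE PARTITION FUNCTION and CREATE PARTITION SCHEME):

```python
import bisect

# Sketch of a RANGE LEFT partition function: boundary values map each row
# to a partition number, and an aligned scheme would then map partitions
# to filegroups.
BOUNDARIES = [20140101, 20150101, 20160101]   # RANGE LEFT boundary values

def partition_number(date_key):
    # RANGE LEFT: a value equal to a boundary lands in the partition to
    # the boundary's left, which bisect_left gives us directly.
    return bisect.bisect_left(BOUNDARIES, date_key) + 1

print(partition_number(20131231))  # 1
print(partition_number(20140101))  # 1 (equal to boundary -> left partition)
print(partition_number(20140102))  # 2
print(partition_number(20160102))  # 4
```

Partition elimination is the optimizer doing this same mapping on a predicate so it only scans the partitions that can contain matching rows.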

PART II: Developing SSIS Packages
To be continued…


Posted by on April 2, 2015 in Uncategorized


RocketTab Must Die

RocketTab is spyware that is passing itself off as adware. It proxies your http and https connections to the internet and injects boatloads of garbage ads into legitimate websites. This is hijacking with a lousy excuse of making your search “better” by modifying your top search results. It is buggy, which causes errors in browsing, and is dangerously similar to the Superfish software that Lenovo was placing on its PCs. This method of MITM attacking to push ads must die a painful death.

I’m not sure I like the ad supported direction that media is going. I’m also not sure I like paying for things either… and yes I understand the contradiction. What I am sure about is we need to scale the ads and general invasion of privacy back a notch or three. This software is getting installed without users understanding of what is happening. It is spawned from greed and lousy, immoral business practices.

I pay for Netflix, I rent movies, I go to the theater, I watch ads, and I am OK with the collection of my viewing history for the sites that I intend to go to BY the sites that I go to, like YouTube and Hulu. But I recently had a first hand experience with this garbage called RocketTab.

That Dirty, Disgusted Feeling

I went for a trip to visit my mom and hopped on her computer because I forgot to set my out-of-office responses. I opened an incognito window and logged into my personal email and then was about to log into my work email when I noticed something strange.


That is definitely not the issuer of my work’s public webmail certificate. Fiddler is actually perfectly legitimate web debugging software. So am I correct in thinking that these lazy sloth developers of crapware reused the Fiddler certificate?

Normally, if the HTTPS part is green I don’t bother checking the certificate. We had just been talking at dinner about Lenovo and their missteps, so I got curious and checked. I consider myself security conscious, and yet I had already sent my personal email information to a man-in-the-middle attacker. I had almost sent over my work credentials too.

I started looking at netstat. I saw that when I would open the browser it was connecting to a proxy in the status bar. I took a look at Resource Monitor and saw a boatload of public internet addresses that this “Client.exe” was connected to. Netstat showed Client.exe had a listener on port 49181. Chrome is supposed to be connecting to the public internet, not Client.exe.


The first thing I did was go into “Manage Computer Certificates” and delete the two Fiddler certificates from the root store. This was successful in changing the green chrome lock to a proper red error.

The next thing I did was remove the proxy from lan settings.


After that I removed “RocketTab” from programs via the control panel. As soon as this was done all the “Client.exe” connections went to TIME_WAIT status because they were reset. RocketTab was the culprit.

The last thing I did was change all my passwords.

This man-in-the-middle attack on client machines needs to stop. It is a sneaky activity that normal users do not understand. They generally don’t want the junk applications that these types of ad services support anyway. Users have been socially engineered into installing this stuff, and it is not clear how to get rid of it or that it is even running in the background. It is a poor business model that needs to be destroyed.

Leave a comment

Posted by on March 7, 2015 in Security

