
VMware 6.5 LACP Configuration

I just got back from Cisco Live in Las Vegas and brought a nasty cold with me. With the extended time off work, I came back to a sizeable pile of “priorities”. One of the more interesting/challenging things I was working on was getting LACP going. We had some new rackmounts that were partially configured and had ESXi installed. They had two 10Gb ports which were handling all the traffic on standard virtual switches. The networking guy wanted to turn on LACP, which is considered a best practice, but we couldn’t get it going at first for various reasons. It is one of those settings where both the host side and the switch side have to match: either both on or both off. Now that the project is nearing some deadlines, we decided to give it another go.

There are a couple of key reasons you might want to set up Link Aggregation Control Protocol (LACP) on uplink ports:

  1. Faster failover in the event of a switch or port going offline.
  2. Higher bandwidth for a single logical uplink.

VMware does a pretty nice job of handling failover without this turned on. VMs run on a single connection and jump over to the other connection if there are issues. That might be reason enough not to go through the extra hassle of setting up LACP. Another reason is you might get yourself into a chicken-and-egg scenario. If your VMkernel management ports have to run LACP, and you have a vCenter that runs on a host with only LACP available… you might have a hard time configuring your virtual distributed switch. You might be able to log into a remote console of the host and revert management network changes in 6.5, but I have not tested this. For this reason, I recommend using different NICs (perhaps a pair of onboard 1Gb ports) for your ESXi management network.

So step one is to get your host online with a VMware standard switch. Then you can deploy the vCenter 6.5 appliance to this host; you will need vCenter to configure LACP. I would also recommend keeping vCenter and ESXi management traffic on the standard switch. This can be done by editing the port group on the vCenter appliance VM.

On the physical switch side, you must set up a vPC (virtual port channel). This is done by configuring a port channel on each switch, then a virtual port channel that pairs the port channels across the two switches.
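
Since vPC is Cisco Nexus terminology, here is a rough NX-OS sketch from memory of the relevant pieces. The interface names, channel numbers, vPC IDs, and keepalive address are all placeholders, and the matching port-channel/vPC config has to exist on both switch peers:

feature lacp
feature vpc

vpc domain 10
  peer-keepalive destination 192.0.2.2
! vPC peer-link configuration omitted

interface Ethernet1/10
  description esx01 vmnic
  switchport mode trunk
  channel-group 20 mode active

interface port-channel20
  switchport mode trunk
  vpc 20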

Then in vCenter, you create a distributed virtual switch. Under the configuration tab there are LACP settings. First, create a Link Aggregation Group (LAG). You will want to set this to active so the NICs will negotiate with the physical switch to aggregate the links. Create one LAG with the number of ports you will have for VM traffic in your entire cluster. This is the one step that confused me. The documentation says to create one LAG per port channel ( https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-34A96848-5930-4417-9BEB-CEF487C6F8B6.html ), however, vCenter handles creating a LAG for each host and you only need to create the overall LAG for the distributed switch. So basically I got one host set up pretty easily, but when I went to set up my second host, I couldn’t add the second LAG into the uplink options because two LAGs are not supported.

Once you create the LAG, you can now add hosts to the distributed switch and assign the physical NICs as uplinks with the LAG selected.
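
To sanity-check the negotiation from the host side, there is an esxcli namespace for LACP. I believe the status subcommand below is right for 6.x, but verify it against your build:

esxcli network vswitch dvs vmware lacp status get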

Lastly, create port groups for each VLAN. Then you can assign the LAG uplinks under teaming and failover for each port group. If you have a lot of VLANs, PowerCLI can take the clicking out of this last step.
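
Here is a rough PowerCLI sketch of that; the vDS name, port group name, VLAN ID, LAG name, and uplink names are all placeholders for whatever your environment uses:

Import-Module VMware.PowerCLI
$vds = Get-VDSwitch -Name 'dvSwitch01'
New-VDPortgroup -VDSwitch $vds -Name 'VLAN100' -VlanId 100
# point the new port group at the LAG instead of the standalone uplinks
Get-VDPortgroup -Name 'VLAN100' |
    Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort 'lag1' -UnusedUplinkPort 'Uplink 1','Uplink 2'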


Posted on July 16, 2017 in Network Admin, Virtual

 

Just Reboot

Normally I’d rather dig into a problem and figure out what is going on rather than just reboot… but since June is right around the corner and my last post was in February, I’m in need of a quick writing fix.

 

Now that I have rebooted, I don’t have a lot of evidence to troubleshoot why there were issues. So, only one way to go: forward!

Today’s topic is patching software from the early 90s.

On March 14th, 2017, Microsoft came out with this critical security bulletin: https://technet.microsoft.com/en-us/library/security/ms17-010.aspx

The WannaCry ransomware uses this vulnerability to take over machines and spread itself. SMB1 was spawned from an effort to turn the DOS local file system into a network file system. Microsoft’s implementation piled on a bunch of features and became cool right around the time these pants hit center stage.


SMB not a secure server service? Where have I heard this before? MS08-067 was an epic vulnerability that was exploited for years after the patch was released. I remembered that bulletin number off the top of my head and I’m sure MS17-010 will get baked in there too. After all, MS17-010 is in Metasploit now.

So this many years later, why is this an issue? Mostly backwards compatibility: SMB2 and SMB3 are not vulnerable, but the server service happily falls back to SMB1 if you let it. We have these absolutely infuriating printers that only support SMB1, so we can’t disable SMB1 on the clients. I also blame cloud and devops. New server admins are exposing “things” to the internet for no reason other than convenience and ignorance. While I’m pointing fingers, let’s roast Microsoft for creating massive downloads and overly complicated and slow patching systems. Not only are they slow but they are also fragile. Users are not innocent either; they must click all the attachments in their email, spawning malware in privileged network locations. Also, users don’t want to agree to lengthy monthly maintenance windows.
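
On anything Windows 8/Server 2012 or later, you can at least audit the server side of SMB1 and, where no legacy devices depend on it, shut it off. A quick sketch, test before pushing it broadly:

# is the server service still willing to speak SMB1?
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol
# if nothing legacy depends on it, turn it off
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force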

Now that we know it is everyone else’s fault, what can I do better? In my position, persuasion is a powerful tool. Automatic maintenance and regular automatic testing are the only ways to scale. More cattle for servers and fewer pets that require special needs. Persuading people that 24/7 uptime is not necessary and that swifter automatic patching is worth it will take some work. Convincing application developers to handle boot order, rather than relying on the servers to boot up in a specific order, would make automation easier and fix a bunch of other potential issues in the process. Creating simpler architectures and not unnecessarily complex microservices. Controlling server sprawl with proper documentation and life-cycle management.

So how did I handle this specific critical vulnerability? It was a team effort for sure. The work really starts as the servers are being built: requestors either accept a default Windows patching window or ask for a custom one. There are automated tests in place that allowed us to proceed with confidence. We used our monitoring systems to identify low OS drive space, and I checked the vCenter database to cross-check before the package was pushed out. There was a validation script on GitHub that I used to monitor progress: https://github.com/kieranwalsh/PowerShell/blob/master/Get-WannaCryPatchState/Get-WannaCryPatchState.ps1 And finally, we went through manually and repaired the unpatched servers or migrated the workload to newly built servers.
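
For a quick spot check on a single box, Get-HotFix works too. The KB number below is the March 2017 security-only update for Windows 7/Server 2008 R2; other OS versions have different KBs, so treat it as an example:

Get-HotFix -Id KB4012212 -ComputerName SERVER01 -ErrorAction SilentlyContinue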

Overall, patchapalooza 2k17 went pretty well. Sometimes I think vulnerability databases have too many false alarms, and it’s hard to pick out what is a really serious issue. Forcing out that many reboots can carry significant risk to systems, especially if you don’t have the people to handle the increased calls.


Posted on May 24, 2017 in Security

 

#SQLSat595 Cleveland Recap

Preparation

I’m sorry for all the people who were affected by the multiple battery issues with the Note 7, but I simply could not find a Dell, HP, Microsoft, or Lenovo laptop that even came close to the Samsung Notebook 7 Spin.

One upgrade had to be done right away though… FLASH… na naaaaaaaa!

I upgraded the Samsung Notebook 7 Spin with a Samsung 850 EVO 500GB M.2 SATA V-NAND SSD. First, summon your internal zen patience. Next, use a tool thinner than a credit card to remove the single plastic snappy back cover.

[Image: the $167 upgrade]

This computer is now a fantastic addition to my fleet and worked out very well for my 3 VM mobile lab required for my presentation on Hacking SQL Server.

I’ve given this presentation at user groups and SQL Saturdays. Each time, I make a solid effort to get the latest versions of Windows Server and SQL Server ready. I also grab the latest hacking tools I can find. There is no room for procrastination when preparing because you don’t know what life will throw at you right before showtime. The organizers did a fantastic job of communicating all the changes to time slots and rooms. The interest survey was an objective way of placing the popular sessions in larger rooms, and when it comes time to have our own SQL Saturday, that is definitely a tool we will take advantage of.

More Preparation

The team building exercise, aka the speaker/volunteer dinner, usually takes place the night before. This is important to build camaraderie and increase the comfort level of a bunch of people who may be in the area for the first time. It can also feel like a family reunion, running into that crazy uncle and the cool, hip west coast cousin who makes for some hilarious conversation. One downside is that the combination of talking loudly over music and hops until late at night can cause dehydration and raspy voices in the morning. Nothing some phlegm-building orange juice and over-the-counter medications can’t fix.

[Image: Janis in the auditorium]

SQL Saturday is full of awesome presentations. Attending very useful presentations, right before I have to give mine, causes me to go on unnecessary tangents. I really wanted to watch more speakers in action, but decided to chill out for most of the morning. I did follow and thoroughly enjoy Janis’s 12 step program and can attest that the program works. I’ll be modifying my own less effective program. I was missing some steps and definitely will take advantage of the scripts around finding the worker bees behind CXPACKET.

The lunch was great and gave me a little extra time to familiarize myself with the microphone before my session. The wireless mic was on point (thanks AV) and the monitors facing the speaker really helped. The auditorium could seat over 100 people and had 3 projectors.

Show Time

http://www.sqlsaturday.com/595/Sessions/Details.aspx?sid=57510 Most of the content is uploaded; feel free to contact me using the email address on the last slide.

The demos went smoothly but perhaps a little slowly. I used ZoomIt in some spots but failed to do it in others. I had plans for two more quick demos but ran short on time. Some previous feedback I had was to engage the audience more. Since then, I have added a task of assigning an attacker and a victim to remember the IP addresses. I also added another person to give me a 30 minute warning, which worked out perfectly. And I make a point to hack a poorly configured server, but not to show the misconfigurations until later in the presentation. This has, without fail, invoked some comments/questions about the layers of security that would block the hacks I am explaining. That is always a good sign that some people are understanding the content, and it makes me happy.

I specifically asked the attendees to think of constructive criticism on their evaluations and if they couldn’t think of anything, make something up. This is my favorite:

[Image: the best review]

If you want to attend more SQL Saturdays, you have to be willing to participate in volunteering, speaking, and sponsoring these events. If those things don’t fit into your life at this time, remember attendees are why we do what we do, so make a point to check the schedule and go to a SQL Saturday!

Cheers!


Posted on February 5, 2017 in PASS

 

I’m speaking Saturday, Feb 4th in Cleveland, OH

http://www.sqlsaturday.com/595/Sessions/Details.aspx?sid=57510

It has been quite a while since I’ve spoken at a SQL Saturday, partly because Michigan has taken a pass on hosting one the last couple of years. These things are tough to get going and I have great respect for the select few individuals who grab the bull by the horns and see the event through to the end.

The last time, I was caught off guard by a major upgrade to my hacking tools. Brand new stuff is not something a speaker should try to implement two days before the presentation, for the sake of the demos and for the sake of their attendees. But I couldn’t resist, and it actually went quite well.

Hopefully my history of taking these risks right before the presentation doesn’t blow up on me, but I have another last minute enhancement. I trashed all my VMs and installed a 500GB V-NAND SSD. Oh boy, is it sweet. Hopefully, it will help my presentation fit into the 60 minute time slot.

I’ll follow up with a recap of the event because I also plan on soaking up some valuable skills from the other attendees and speakers.


Posted on February 1, 2017 in PASS

 

Query SQL Server from PowerShell

Consistency is king. Either consistently broken, or consistently working, automation will help you get there.

PowerShell has come a long way. Early on I avoided it, and I’m happy I did. Still to this day, we don’t have a consistent version of PowerShell across our servers. Execution policies are broken. All the code examples online didn’t provide me with a simple way to do the garbage in, garbage out data processing I needed. Time and time again, my first couple of attempts would work, then I would find that some servers just wouldn’t cooperate. Error handling was a bear. But there were so many blind fans out there who would look down their noses at me that I started to wonder if I was in the wrong.

Fast forward five more years and PowerShell has gotten a lot better. Working with REST APIs, tight integration with VMware, SharePoint, and Exchange, and easy .csv manipulation got me started on learning the language. But even today, we still don’t have an elegant way to manage data with SQL Server.

The power is in the “|” for most things PowerShell. With great power comes terrible error handling. Most things I will end up doing don’t really need to pipe data from one cmdlet to another, so I’ve tossed in the towel on that for now.

Getting Started…

First, get administrator on your Windows machine.

Then, let’s get the latest bits. SQL Server 2016 Developer Edition is now free. That really helps us bloggers and speakers not have to load up a 180 day evaluation edition and then find out it has run out right before our talk. SSMS is also free.

After checking out your local instance and connecting with SSMS to make sure you did everything right, let’s enable PowerShell by making the dreaded execution policy change and importing the module.

Run PowerShell as administrator and execute these two commands. The first one gives you the current setting so you can switch it back if needed.

Get-ExecutionPolicy
Set-ExecutionPolicy unrestricted
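
If you’d rather not open the whole machine up (or can’t get administrator), scoping a less permissive change to your user is an option; the error messages later in this post even hint at it:

Set-ExecutionPolicy RemoteSigned -Scope CurrentUser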

For the remainder we can run without administrator. I’d suggest opening up the Windows PowerShell ISE so you can get some syntax highlighting in the top script window. To do anything with SQL we’ll want to run this command:

Import-Module SqlServer

Phew! That was a lot of setup.

Lets actually write some code!

I am not a fan of one-liners. It’s great in a conversation to say that was one line of code, to trivialize the complexity, but we are realists here and we are going to use some variables so we don’t have to feel like we are reading the world’s longest run-on sentence.

$ConnectionString = 'Server=localhost;Database=VMDB;Trusted_Connection=true'
$sql = 'SELECT name,physical_name FROM sys.database_files'
$results = Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql

If the stars have aligned, after you run those three commands, you will have some results in the $results variable. Let’s take a look at the results.

$results.name[0]

Taking a look at the query: if the database in your connection string has the default of two files, we get two rows in our $results variable, each with a “name” and a “physical_name” column, so four cells in total. We can reference a column by name, and a row by an integer index, assuming we know the number of the row we want.
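
If you don’t know the row number up front, looping over the rows works too. A minimal sketch using the same $results from above:

foreach ($row in $results) {
    # each row exposes the columns as properties
    Write-Output ("{0} lives at {1}" -f $row.name, $row.physical_name)
}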

Making it useful by accepting variables…

Let’s take a look at adding a dynamic WHERE clause. I’d like to filter on the name column by whatever the user wants. In this example we will build the query string by asking for input with Read-Host.

$filename = read-host "Enter database file logical name"
$ConnectionString = 'Server=localhost;Database=VMDB;Trusted_Connection=true'
$sql = "SELECT name,physical_name FROM sys.database_files WHERE name = '$filename'"
$results = Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql
$results

This is what I love about blogging. I can act like I got that right on the first try! I did not. Not even the second try, or third. I’m not going to mention how many tries it took. That said, it is fairly easy, with one catch: the quotes.

This particular quirk (problem is debatable) of PowerShell makes it rather tricky to build a string with a variable. Notice my two strings, $ConnectionString and $sql, are surrounded in either single quotes or double quotes? The double quotes allow me to include single quotes in the string. They also expand the variable $filename, so I get the variable’s value instead of the literal “$filename”.
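
Another option that sidesteps some of the quoting dance is the -f format operator (the same injection caveat from the summary below applies):

$sql = "SELECT name,physical_name FROM sys.database_files WHERE name = '{0}'" -f $filename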

Summary

This code is crap. You should not allow a user to free-form enter text into a query string; that allows for SQL injection. In my case, the scripts I need will be protected and I will be the user, so it is not a concern.
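
If you ever do need to take untrusted input, a parameterized query keeps the value out of the SQL text entirely. Here is a minimal sketch using the .NET SqlClient classes directly rather than Invoke-Sqlcmd:

$filename = Read-Host "Enter database file logical name"
$conn = New-Object System.Data.SqlClient.SqlConnection('Server=localhost;Database=VMDB;Trusted_Connection=true')
$cmd = New-Object System.Data.SqlClient.SqlCommand('SELECT name, physical_name FROM sys.database_files WHERE name = @fname', $conn)
[void]$cmd.Parameters.AddWithValue('@fname', $filename)
# SqlDataAdapter opens and closes the connection for us
$table = New-Object System.Data.DataTable
$adapter = New-Object System.Data.SqlClient.SqlDataAdapter($cmd)
[void]$adapter.Fill($table)
$table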

Appendix

I changed my mind, I am going to include the embarrassing details, which is the PowerShell command window I used while writing this blog. I find it hilarious to make fun of my coworkers when I see their screens full of red text on a blue background.

[Image: a typical PowerShell window]

I figured I’d include the text too in case someone is searching for these errors.

PS C:\Users> $ConnectionString = ‘Server=localhost;Database=VMDB;Trusted_Connection=true’
$sql = ‘SELECT name,physical_name FROM sys.database_files’
Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql
Invoke-Sqlcmd : The ‘Invoke-Sqlcmd’ command was found in the module ‘SqlServer’, but the module could not be loaded. For more information, run ‘Import-Module SqlServer’.
At line:3 char:1
+ Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql
+ ~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Invoke-Sqlcmd:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CouldNotAutoloadMatchingModule

PS C:\Users> Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100
Add-PSSnapin : The Windows PowerShell snap-in ‘SqlServerCmdletSnapin100’ is not installed on this computer.
At line:1 char:1
+ Add-PSSnapin SqlServerCmdletSnapin100
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (SqlServerCmdletSnapin100:String) [Add-PSSnapin], PSArgumentException
+ FullyQualifiedErrorId : AddPSSnapInRead,Microsoft.PowerShell.Commands.AddPSSnapinCommand

Add-PSSnapin : The Windows PowerShell snap-in ‘SqlServerProviderSnapin100’ is not installed on this computer.
At line:2 char:1
+ Add-PSSnapin SqlServerProviderSnapin100
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (SqlServerProviderSnapin100:String) [Add-PSSnapin], PSArgumentException
+ FullyQualifiedErrorId : AddPSSnapInRead,Microsoft.PowerShell.Commands.AddPSSnapinCommand

PS C:\Users> import-module SqlServer
import-module : File C:\Program Files\WindowsPowerShell\Modules\SqlServer\SqlServer.ps1 cannot be loaded because running scripts is disabled on this system. For more
information, see about_Execution_Policies at http://go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:1
+ import-module SqlServer
+ ~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : SecurityError: (:) [Import-Module], PSSecurityException
+ FullyQualifiedErrorId : UnauthorizedAccess,Microsoft.PowerShell.Commands.ImportModuleCommand

PS C:\Users> Set-ExecutionPolicy unsigned
Set-ExecutionPolicy : Cannot bind parameter ‘ExecutionPolicy’. Cannot convert value “unsigned” to type “Microsoft.PowerShell.ExecutionPolicy”. Error: “Unable to match
the identifier name unsigned to a valid enumerator name. Specify one of the following enumerator names and try again:
Unrestricted, RemoteSigned, AllSigned, Restricted, Default, Bypass, Undefined”
At line:1 char:21
+ Set-ExecutionPolicy unsigned
+ ~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Set-ExecutionPolicy], ParameterBindingException
+ FullyQualifiedErrorId : CannotConvertArgumentNoMessage,Microsoft.PowerShell.Commands.SetExecutionPolicyCommand

PS C:\Users> Get-ExecutionPolicy
Restricted

PS C:\Users> Set-ExecutionPolicy unrestricted
Set-ExecutionPolicy : Access to the registry key ‘HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell’ is denied. To change the execution
policy for the default (LocalMachine) scope, start Windows PowerShell with the “Run as administrator” option. To change the execution policy for the current user, run
“Set-ExecutionPolicy -Scope CurrentUser”.
At line:1 char:1
+ Set-ExecutionPolicy unrestricted
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : PermissionDenied: (:) [Set-ExecutionPolicy], UnauthorizedAccessException
+ FullyQualifiedErrorId : System.UnauthorizedAccessException,Microsoft.PowerShell.Commands.SetExecutionPolicyCommand

PS C:\Users> import-module SqlServer

PS C:\Users> Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100
Add-PSSnapin : The Windows PowerShell snap-in ‘SqlServerCmdletSnapin100’ is not installed on this computer.
At line:1 char:1
+ Add-PSSnapin SqlServerCmdletSnapin100
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (SqlServerCmdletSnapin100:String) [Add-PSSnapin], PSArgumentException
+ FullyQualifiedErrorId : AddPSSnapInRead,Microsoft.PowerShell.Commands.AddPSSnapinCommand

Add-PSSnapin : The Windows PowerShell snap-in ‘SqlServerProviderSnapin100’ is not installed on this computer.
At line:2 char:1
+ Add-PSSnapin SqlServerProviderSnapin100
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (SqlServerProviderSnapin100:String) [Add-PSSnapin], PSArgumentException
+ FullyQualifiedErrorId : AddPSSnapInRead,Microsoft.PowerShell.Commands.AddPSSnapinCommand

PS C:\Users> $ConnectionString = ‘Server=localhost;Database=VMDB;Trusted_Connection=true’
$sql = ‘SELECT name,physical_name FROM sys.database_files’
Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql

name physical_name
—- ————-
VMDB C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\VMDB.mdf
VMDB_log C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\VMDB_log.ldf

PS C:\Users> $results=Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql

PS C:\Users> $results

name physical_name
—- ————-
VMDB C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\VMDB.mdf
VMDB_log C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\VMDB_log.ldf

PS C:\Users> $results.name
VMDB
VMDB_log

PS C:\Users> $results.name[0]
VMDB

PS C:\Users> $ConnectionString = ‘Server=localhost;Database=VMDB;’

PS C:\Users> Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql
Invoke-Sqlcmd : Login failed for user ”.
At line:1 char:1
+ Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Invoke-Sqlcmd], SqlException
+ FullyQualifiedErrorId : SqlExectionError,Microsoft.SqlServer.Management.PowerShell.GetScriptCommand

Invoke-Sqlcmd :
At line:1 char:1
+ Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ParserError: (:) [Invoke-Sqlcmd], ParserException
+ FullyQualifiedErrorId : ExecutionFailureException,Microsoft.SqlServer.Management.PowerShell.GetScriptCommand

PS C:\Users> $filename = read-host “Enter database file logical name: ”
Enter database file logical name: : VMDB_log

PS C:\Users> $filename
VMDB_log

PS C:\Users> $filename = read-host “Enter database file logical name”
$ConnectionString = ‘Server=localhost;Database=VMDB;Trusted_Connection=true’
$sql = ‘SELECT name,physical_name FROM sys.database_files WHERE name = $filename’
$results = Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql
Enter database file logical name: VMDB_log
Invoke-Sqlcmd : Invalid pseudocolumn “$filename”.
At line:4 char:12
+ $results = Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $ …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Invoke-Sqlcmd], SqlPowerShellSqlExecutionException
+ FullyQualifiedErrorId : SqlError,Microsoft.SqlServer.Management.PowerShell.GetScriptCommand

PS C:\Users> $filename = read-host “Enter database file logical name”
$ConnectionString = ‘Server=localhost;Database=VMDB;Trusted_Connection=true’
$sql = ‘SELECT name,physical_name FROM sys.database_files WHERE name = ‘ + $filename
$results = Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql
Enter database file logical name: VMDB_log
Invoke-Sqlcmd : Invalid column name ‘VMDB_log’.
At line:4 char:12
+ $results = Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $ …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Invoke-Sqlcmd], SqlPowerShellSqlExecutionException
+ FullyQualifiedErrorId : SqlError,Microsoft.SqlServer.Management.PowerShell.GetScriptCommand

PS C:\Users> $sql
SELECT name,physical_name FROM sys.database_files WHERE name = VMDB_log

PS C:\Users> $sql = ‘SELECT name,physical_name FROM sys.database_files WHERE name = ”$filename”
The string is missing the terminator: ‘.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : TerminatorExpectedAtEndOfString

PS C:\Users> $sql = ‘SELECT name,physical_name FROM sys.database_files WHERE name = ”’$filename”’
The string is missing the terminator: ‘.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : TerminatorExpectedAtEndOfString

PS C:\Users> $sql = ‘SELECT name,physical_name FROM sys.database_files WHERE name = `’$filename`’
At line:1 char:74
+ … name,physical_name FROM sys.database_files WHERE name = `’$filename`’
+ ~~~~~~~~~~~
Unexpected token ‘$filename`” in expression or statement.
At line:1 char:83
+ … name,physical_name FROM sys.database_files WHERE name = `’$filename`’
+ ~~
Unexpected token ‘`” in expression or statement.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : UnexpectedToken

PS C:\Users> $sql = ‘SELECT name,physical_name FROM sys.database_files WHERE name = `’$filename`”
The string is missing the terminator: ‘.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : TerminatorExpectedAtEndOfString

PS C:\Users> $temp = ‘this is ” a quote’

PS C:\Users> $temp
this is ‘ a quote

PS C:\Users> $sql = ‘SELECT name,physical_name FROM sys.database_files WHERE name = ”$filename”
The string is missing the terminator: ‘.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : TerminatorExpectedAtEndOfString

PS C:\Users> $sql = ‘SELECT name,physical_name FROM sys.database_files WHERE name = ”$filename”’

PS C:\Users> $sql
SELECT name,physical_name FROM sys.database_files WHERE name = ‘$filename’

PS C:\Users> $filename = read-host “Enter database file logical name”
$ConnectionString = ‘Server=localhost;Database=VMDB;Trusted_Connection=true’
$sql = ‘SELECT name,physical_name FROM sys.database_files WHERE name = ”$filename”’
$results = Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql
Enter database file logical name: VMDB

PS C:\Users>
PS C:\Users> $results

PS C:\Users> $sql
SELECT name,physical_name FROM sys.database_files WHERE name = ‘$filename’

PS C:\Users> $filename = read-host “Enter database file logical name”
$ConnectionString = ‘Server=localhost;Database=VMDB;Trusted_Connection=true’
$sql = ‘SELECT name,physical_name FROM sys.database_files WHERE name = ”’ + $filename + ”’
$results = Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql
The string is missing the terminator: ‘.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : TerminatorExpectedAtEndOfString

PS C:\Users> $sql = “SELECT name,physical_name FROM sys.database_files WHERE name = ‘$filename'”

PS C:\Users> $sql
SELECT name,physical_name FROM sys.database_files WHERE name = ‘VMDB’

PS C:\Users> $results = Invoke-Sqlcmd -ConnectionString $ConnectionString -Query $sql

PS C:\Users> $results

name physical_name
—- ————-
VMDB C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\VMDB.mdf

PS C:\Users>


Posted on December 6, 2016 in Uncategorized

 

#sqlsat552 Lincoln, NE recap

Before you call me the crazy SQL guy (I think that name is already taken), I had several motives for driving 10 hours to Nebraska. In no particular order:

  • My wife and I work in different states, Lincoln happens to be on the way.
  • We like nature, this one was in a State Park
  • There was a cabin available for 3 nights and at a decent rate that allowed dogs
  • SQL Saturdays are awesome
  • Free sessions and networking
  • Free volunteering (sounds weird but if you have done it, you understand)
  • I’m considering coordinating my own SQL Saturday

However, with every opportunity there is a cost:

  • All-in I probably spent $700 I would not have spent otherwise
  • I missed some football games (no big deal)
  • I missed opening day of hunting season (kind of a big deal)

Here are some of my notes from the sessions I attended

Getting Started with Extended Events – Andy Galbraith
Switching from a working solution to something new when you don’t necessarily have to can cause some anxiety. This anxiety causes paralysis. Sometimes we all need a little nudge to really get going.
I was a little confused about the statement that extended events fire synchronously, which is what makes them lightweight. It sounds like a bit of a contradiction I still have to sort out, but I think I understand that some parts are always running and you can asynchronously choose to record them.
The key bits are packages, which contain events; actions, which are the info you track; and predicates, which filter like a WHERE clause
The event file is a .xel file and is one of the least impactful of the ways to save off the information
System_health is like the default trace and always running; don’t modify it
Use caution with which events you pick and where you store them; query plans can cause a lot of overhead, and storing in the memory buffer can bloat memory
Set STATE = START to make the XE session work (a sketch follows these notes)
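
Putting those notes together, a minimal session might look like this; the event, action, predicate, and file target are illustrative choices on my part, not Andy’s exact demo:

CREATE EVENT SESSION [long_statements] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.client_hostname)
    WHERE duration > 1000000 -- the predicate; duration is in microseconds
)
ADD TARGET package0.event_file (SET filename = N'long_statements.xel');
-- nothing is captured until you flip the session on
ALTER EVENT SESSION [long_statements] ON SERVER STATE = START;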

Get Your Optimizer to Give up All Its Secrets – Brian Hansen
Brian had a nice organized flow which made me compare his style to more of a college professor
Logical order of operations is important and starts with FROM and ON
Intellisense doesn’t work until you have a FROM
He pointed out some confusion with SELECT meaning something different in relational algebra (there it’s a filter; a projection is more like the T-SQL SELECT)
The query tree is the output of the algebrizer, and he had a nifty app to visualize query trees
Distinct is like a group by
Constant scans are usually good because they can help avoid data access – ex: a WHERE clause of A>50 AND A<50

Tips and Tricks for Performance Troubleshooting – Phillip Podhrasdsky
Phillip had a faster pace and covered a lot of ground for an intro session.
Query plans are key, along with statistics. Stats histograms have at most 200 steps, so you might not get a step for every item in a “colors” table. Creating additional filtered stats can help with this issue

Query Plan Deep Dive – Kevin Boles
Kevin is a great speaker and had a well-organized slide deck.
Ref: Paul White – cross apply blog post
Nested loop joins can be inefficient if a large data pipe is on top
The optimizer will sometimes choose to sort the data to do some joins
Merge join works well if there is order to both tables
Hash joins – the table is built in memory and will spill to tempdb, works well if not ordered
Stream aggregates require order (non-blocking)
Fast first – can cause more overall work, but creates a better user experience for a poorly tuned query
Asserts- check things like True or False conditions (ex: finding a key to delete)
Pick fill factor intelligently
Tsqlt.org
ADO.net timeout is 60sec ish
Optimize for unknown or recompile are options if you run into sniffing issues
UDFs will generate warnings in the query plans and cause some bad estimates

Dammit Jim! Dr McCoy’s Field Guide to system_health (and the default trace) – Ed Leighton-Dick
Finding answers to “who done it” with the things that record data by default
System_health is XE, and the default trace is Profiler-based – they both get trimmed after a while

Afterwards we met up at one of the Peter Kiewit Lodge cabins for some drinks and food. I talked to the organizer a bit and got some great information from volunteering and the after party to help me consider organizing my own SQLSaturday.
I had a great time and look forward to attending more SQLSaturday events next year.


Posted on December 3, 2016 in Uncategorized

 

PASS Summit 2016 Recap

Introduction

I have a fairly massive bundle of lanyards in my drawer at work. vmworld, ciscolive, emcworld, sql connections, sql rally, grr con, bsides detroit, a bunch of sql saturdays and more. All are great learning experiences and if I didn’t have to do actual work, I’d love to attend more conferences. I definitely plan to attend the PASS Summit next year and many years to come.

All employers should commit to a couple weeks of training for all associates. Most new people can learn from the experience of more senior members of their team but both the trainer and trainee need to be available for this to work. Classroom training from companies like New Horizons can come in handy for those associates who don’t like to travel or for companies that don’t have a good mentor in that field. Conferences that require travel can sometimes cost 3-4k but with the right person, are totally worth it. In fact, sending multiple associates to the same conference gives them a chance for teambuilding somewhat outside the stuffy bounds of the office.

Being a PASS chapter leader grants me free admission to the 3 main conference days. That got my reported expenses down to 2k, with the majority of that being flight, hotel, and cabs. Chapter leader is a time consuming job, but very rewarding in many ways, including this free ticket.

The Summit stands out from all other conferences I’ve been to for a couple of reasons. First, the content is mostly not vendor content and is more relevant to the “how” part of my job. Tips and tricks instead of products are explained in many sessions. We as attendees are mostly not getting sold a particular product because a majority of the speakers are community speakers. Sure, there are definitely lots of fans of SQL Server and not much mention of Oracle/MySQL/Postgresql/DB2/etc… but I think you get the point. Second, it has the feeling of a family reunion for a lot of people. We have that crazy uncle, annoying younger brother, CEO cousin, and a poor bunch of Cleveland Indians fans. Networking is hard, but it is the most beneficial aspect of the conference. There seem to be more people willing to meet new people at PASS than at any other conference I’ve been to. There are also many other PASS events that give you an opportunity to reconnect with these people.

Without further ado, here is my play-by-play.

Monday

Alaska Airlines (their 23″ rule is the largest free carry-on allowance that I am aware of)

I would have liked to attend a pre-con that day, but I would have had to fly out Sunday.

Tuesday

Registration

Chapter Leader Meeting

I didn’t know this last year but this is a mandatory meeting for chapter leaders who get the free Summit ticket. Lots of good ideas discussed for my local chapter (http://glass.sqlpass.org) in regards to sponsorship, speakers, and locations.

Welcome Reception

A single drink ticket was provided and cash bar after that. Light food and very tasty. Great chance to network.

Wednesday

Keynote

Discussed HTAP, hybrid transaction/analytical processing. Basically, the reason we do ETL is to avoid too much stress on the primary system. Some of this can go away with multiple object types that store the same data in different ways. An In-Memory OLTP table with a Clustered Columnstore Index can serve both reports and transactions at the same time. This is interesting, but the reason this index type was read-only in version 2014 was because updates to this type of structure are slow; updates can happen in version 2016, but I’m not sure how fast they are.
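
For reference, the 2016 syntax for that combination looks roughly like this; the table and columns are made-up examples:

CREATE TABLE dbo.Trades (
    trade_id INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    symbol CHAR(4) NOT NULL,
    price MONEY NOT NULL,
    -- the columnstore serves reporting scans, the in-memory rows serve OLTP
    INDEX cci CLUSTERED COLUMNSTORE
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);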

Microsoft R can be installed on SQL Server or on a standalone server. They have promised huge performance gains through multithreading and scale out options. Not sure if there is a licensing benefit for us to go standalone, built into SQL, or just keep using open source R.

Azure IO bandwidth is 4x any of the other cloud competitors. SQL Server running on Linux was demoed; it even works with AlwaysOn availability groups. It isn’t that appealing to me and won’t be GA until later in 2017.

Architecting SQL Server for Mission Critical and High Performance (HPE Vendor Session)

Discussed some Superdome hardware architecture. One interesting trend is using NVDIMMs. This non-volatile memory can be used to harden log records. DAX bypasses the storage stack, but can still be configured as a hard drive in Windows. SQL can take advantage of this for a small secondary log file with a trace flag and eliminate WRITELOG waits. The size (8GB) and availability of these devices are prohibitive.

Inside SQL Server In-Memory OLTP

Bob Ward has a fast paced, in-depth speaking style that I like. I attended a couple of his sessions and he made his slides available at http://aka.ms/bobwardms

The goal of In-Memory OLTP was to be 100x faster than previous versions. It sounds like they got about 30x faster, mostly due to the removal of latching. The important durability differences between an In-Memory table and a regular engine table were discussed. No UNDO process is required for durability because there is no concept of write-ahead logging; in-flight transactions are just dropped on failure. 701 is the error we will get if we don’t size memory properly for the table we want to enable for In-Memory functionality. There are management objects, but they don’t show up in the database system catalog, since they are in-memory tables and recording that information in regular tables would slow transactions down. There are two main types of indexes, hash and range. Bucket counts need to be determined for hash indexes, but they are much faster if configured properly. XTP_OFFLINE_CKPT is a wait that is reading the transaction log and making decisions to flush to disk. This isn’t a wait that affects transactions, because it runs asynchronously.

Lightning Talks

Vendor Reception

Nimble Storage and Pure Storage were there, not sure if any other storage vendor was. Talked to AWS about some of the things they do. Providing a database as a service separate from a database server as a service might be worth us looking into. VMware was there and working the booth was one of the speakers from EMCworld and CiscoLive. I got some information about some of the max VM sizes in the wild and found out it might be worth trying to put Actuarial SQL Servers as a single VM on a host.

Thursday

Keynote

It was refreshing to see several options in cloud data warehousing presented. Dr. David DeWitt compared a roll-your-own DW with Amazon’s Redshift, Microsoft’s Azure SQL DW, and Snowflake. He also talked about Oracle RAC and Teradata. I liked the comparison of different vendors, with a fairly open minded approach, at a conference that focuses on one vendor. He explained some of the pros and cons of any scale-out data platform.

SQL 2016: It just runs faster

Clock speed hasn’t improved much in 10 years but the number of cores per processor has gotten really high. Several pieces of the SQL engine needed to be re-written to process more in parallel. Parallel redo, parallel insert select, automatic scaling of number of threads and more partitioning allows for more parallel operations in 2016. We need to work on our ISVs to support later versions of SQL Server if we want to improve overall user experience.

Recovery Models Made Simple

Great session that explains recovery and a lot about the SQL Server log file. How active transactions affect space used in the log file was explained very well. Virtual log files and their reuse were explained. fn_dblog(NULL, NULL) can be used to explore the information stored in the log file.
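
If you want to poke at the log yourself, something like this works; fn_dblog is undocumented, so the usual caveats apply:

SELECT TOP (10) [Current LSN], Operation, Context, AllocUnitName
FROM fn_dblog(NULL, NULL);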

Monitoring and Alerting of Availability Groups

Picked up a lot of useful tips and tricks for the couple of AGs we have in production. The SSMS dashboard is pretty useful for monitoring performance. Reviewed the basic steps of an AG transaction. Overall it makes me more comfortable with 2016 AGs, even though we try to avoid them, since they are a copy of the data and a Windows cluster on top of a VMware cluster.

Vendor Session: Storage Testing

Learned about a fairly new tool, Vdbench, which can simulate more realistic virtual workloads. With dedupe and compression inline, synthetic tools do not provide a whole lot of useful information. Argenis and David are awesome speakers.

Microsoft R Sessions

I attended a SQLCAT session and introduction session to try to get a handle on how best to manage R for Actuarial. The open source R doesn’t operate in parallel and Microsoft has created some open source ScaleR functions (rx*) that are comparable but operate in parallel. I saw an interesting demo of predicting the date that a database is going to run out of space with R. That might be a good way to practice with usage of R that is built into SQL Server. There wasn’t much around memory management with R built-in other than I learned it can spawn a lot of extra processes outside of the SQL process.


Posted on November 21, 2016 in Uncategorized

 

That Time I Made SQL Server Too Fast

Things break. We don’t usually intend them to, they just do. Sometimes, maybe we do intend them to break. I’m still suspicious about cars breaking on purpose so we will buy a new one. Other times, they break in ways we don’t expect, like these times I made queries respond too fast.

For example, several years ago I was a fairly new DBA. I wasn’t used to the enterprise-class roasting one gets when things are not quite right. We had a multi-tier application (DB, app server, and a desktop application) that started to respond with errors after a hardware upgrade of the SQL Server.

Darn. We tested everything we could. We reviewed the code and sent it through upgrade analyzer. We restored production backups and tested those on the new hardware.

But it wasn’t enough load for the system to break in this peculiar way. The app server had a limited number of threads to deal with a response from SQL. Seriously, a hard-coded value of 3 threads to go to SQL, get a Visual Basic script stored in the database, and execute that on the app server. I kid you not, sometimes the sausage factory isn’t worth the tour.

This dynamic script was now loading up these 3 threads all day, and it was a lengthy script, so sometimes it would hit the other hard-coded timeout values and the users would start to get errors in large numbers instead of just waiting a little longer than usual.

We tried, and tried some more to work with the vendor. But in the end I had to make SQL slower so the app could keep up. We moved this SQL Server back to old hardware until we could also upgrade the app server.

The most recent time I made SQL too fast was a couple days ago. Another third party vendor who is XML happy. Surprisingly, the problem had nothing to do with XML.

Rewind several years: I convinced a co-worker that SQL 2012 would be a great choice for their application even though the vendor didn’t technically support it yet. Who cares! Microsoft supports 2012, not this vendor. We tried and tried to get the thing to work, but the application just wouldn’t cooperate with SQL 2012. Fortunately, we didn’t have any data, so we just reinstalled the app and used SQL 2008R2. I conceded defeat, probably too soon, and rolled it back after delaying their upgrade a couple of weeks.

Back to a few days ago, that same co-worker fell for my tricks again with a different app. This time SQL 2016, because Bob Ward says it just runs faster ( http://www.sqlpass.org/summit/2016/Sessions/Details.aspx?sid=51194 ). Let me tell you how hilarious this error is as briefly as I can.

This XML happy application has a boatload of tables. Fairly normalized, but lots of XML chatter back and forth with the application, even with just a single user. We are testing a new version and a single user was getting a primary key violation. It consistently happens on SQL 2016, so I had him send it back to the vendor, even though he was threatening physical violence. Given our prior experience of jumping on a new version, I don’t blame him. No way could this be SQL 2016; primary keys are similar across all versions, right?

The vendor comes back with a confused email. “We can’t recreate the issue.” Blah blah, co-worker convinces me to give him access to create the database on a SQL 2012 instance and bam, no error. The beatings start. Even though he knows it’s bogus, still I have bruises and my fear of the new shiny is reinforced.

Are you ready for the shocking news? SQL 2016 is fast. Really fast. Even with absolute garbage design and code, its fast. That is how we got primary key violations.

The table we are working with has a primary key. But the interesting part is, it is a composite key meaning there are a few columns that create uniqueness between rows. So two of the columns can be identical across rows, as long as the third is different. Then we are unique.

Well, in an absolutely vomit-inducing way, the vendor chose a datetime datatype to create this uniqueness. When several rows come back in less than 3ms, even with a boatload of garbage XML to process, we no longer have uniqueness, because datetime isn’t precise enough.
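
You can see the precision limit without the app in the picture at all. datetime stores times in 1/300-second ticks (the milliseconds always end in 0, 3, or 7), so distinct values collapse together:

SELECT CAST('2016-11-21 12:00:00.000' AS datetime) AS a,
       CAST('2016-11-21 12:00:00.001' AS datetime) AS b;
-- both columns come back as 2016-11-21 12:00:00.000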

I’m not sure what they might consider a fix, but I explained to them what is happening. Hopefully, they don’t jump to datetime2 as the knee-jerk fix and instead go with an actually unique key.


Posted on November 21, 2016 in Uncategorized

 

vmworld 2016 keynote

I’ve been lucky enough to be able to attend vmworld once in 2013 but didn’t get to go this year. I decided to take advantage of the free live stream of the keynote on Monday.

Pat Gelsinger took the stage with 23,000 of “his closest friends” a.k.a vmworld attendees, and had a few interesting things to say. First, he no longer makes a distinction between digital and non-digital businesses. Truth is, I cannot think of a business that will grow into the next decade without embracing technology. They may not need VMware and the cloud but with the new generation seeking products and services exclusively online, every business will need a way to engage these new customers.

After that he shared some interesting data points:

– in 2006, only 2% of the cloud was considered public, with most of that being salesforce.com
– in 2016, about 80% is still traditional IT, meaning automated service deployment or future service-based IT is still only 20%
– 2016 is the year that 50% of hardware is in some form of public cloud, shifting rapidly away from private datacenters
– in 2016, 70% of workloads run in traditional IT, and by 2021 that number will be down to 50%

Servers are becoming more powerful, and consolidation is improving. Pat doesn’t see this as a shrinking market but actually generating growth by making the cost of servers more accessible to the masses. He compared this to processor enhancements of the 90s and I agree with this assessment.

Enterprises are finding cases where the public clouds make a lot of sense but there will (always?) be lots of cases where the existing owned datacenter will have advantages and certain services will be provided in-house.

The most interesting announcement from the keynote was VMware’s cross-cloud platform. This was a higher level GUI for managing services on all the big public clouds, and private clouds. There was a demo where you could enter the login information for AWS and this platform would discover the applications hosts under this account and present them in a uniform fashion with other cloud services.

This service is interesting, but it appears to only work with IBM’s cloud, which advertises “500” customers. That doesn’t seem like a lot of market penetration to me. If this cross-cloud platform doesn’t work with all the cloud vendors, it doesn’t make it very far in my opinion.

Another buzzword I picked up on was Automated Lifecycle Manager. This sounds like a product that has reached its end of life (https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2015661). There is an obvious need for this when the demand for services increases with the decrease of cost for cloud services. We have to automate the full lifecycle to prevent sprawl and complexity. I wonder if VMware is coming out with something new on this front and that is why it was mentioned.

The guest speakers from IBM, Marriott, and Michael Dell all seemed to fall flat to me. None had any really innovative or inspiring comments. I’m not sure who the speaker was, but towards the end they started talking about service architectures. One comment I did like was choosing multiple cloud providers for a single service in case one cloud goes down. This has happened, and even though it could cause a lot of complexity, it could be something we would want to architect around.

When I attend conferences like this, I usually try to find something more valuable to do with my time during the general sessions. In conclusion, the session wasn’t that bad. My view wasn’t totally negative like this post (http://diginomica.com/2016/08/29/vmworld-2016s-keynote-snoozer-is-no-worry-for-the-infrastructure-innovators/), but since my opportunity cost was low, I don’t regret tuning in.


Posted on August 30, 2016 in Virtual

 

MTC Chicago Visit

Microsoft has datacenters that are open to customers for training and consulting of all kinds. I’ve been to the newer one in Detroit and to the MTC Chicago twice now with the most recent time being this week. It was a good visit sponsored and encouraged by our storage vendor.

The key topic was ways to manage copies of data. We have some specific needs surrounding reporting, agile development, and quality assurance that require several copies of large databases. Thin provisioning, compression, and deduplication help control costs, but several arrays do best when the copy is a snapshot done at the block level. This reduces the cost of having that copy and keeping it around. These users, however, generally want to create the copy at the database level. This has a few complexities, even in a fairly standard virtual environment.

Some of these copies require the source system to be quiesced, so the snapshot technology needs to integrate with your database technology to cause a pause in writes while the snapshot completes. SQL handles crash consistency very well, but if some copies require quiescing, I may as well set up a system that does that for all of them. The virtual layer has abstraction layers between what the guest operating system sees and what the storage system presents. This also needs to be coordinated. All of these separate scripts that use different APIs need to be scheduled, and some need to be able to be called from another application or run on demand.

This problem has been mostly solved by some of the automation I have done surrounding backups and restores. We have a working solution, but it takes technical time and more storage than it needs. Whenever you have a solution that is good enough, it prevents you from creating a great solution. It is a good solution up to a point, until the number of copies and the time it takes to make a copy become prohibitive.

It was a great trip to see another datacenter in action that has the up and coming technologies. It gave me motivation to work on the great solution instead of settling for one that barely works.


Posted on July 29, 2016 in SQL Admin, Storage