RocketTab Must Die

RocketTab is spyware passing itself off as adware. It proxies your HTTP and HTTPS connections to the internet and injects boatloads of garbage ads into legitimate websites. This is hijacking, with the lousy excuse of making your search “better” by modifying your top search results. It is buggy, causing browsing errors, and it is dangerously similar to the Superfish software that Lenovo was placing on its PCs. This method of man-in-the-middle attacking to push ads must die a painful death.

I’m not sure I like the ad-supported direction that media is going. I’m also not sure I like paying for things either… and yes, I understand the contradiction. What I am sure about is that we need to scale the ads and the general invasion of privacy back a notch or three. This software gets installed without users understanding what is happening. It is spawned from greed and lousy, immoral business practices.

I pay for Netflix, I rent movies, I go to the theater, I watch ads, and I am OK with the collection of my viewing history BY the sites that I intend to go to, like YouTube and Hulu. But I recently had a firsthand experience with this garbage called RocketTab.

That Dirty, Disgusted Feeling

I went for a trip to visit my mom and hopped on her computer because I forgot to set my out-of-office responses. I opened an incognito window and logged into my personal email and then was about to log into my work email when I noticed something strange.

[Screenshot: certificate details showing Fiddler as the issuer]

That is definitely not the issuer of my work’s public webmail certificate. Fiddler is actually perfectly legitimate web debugging software. So am I correct in thinking that these lazy sloth developers of crapware reused the Fiddler certificate?

Normally, if the HTTPS part is green I don’t bother checking the certificate. For some reason we had just been talking at dinner about Lenovo and their missteps, so I got curious and checked. I consider myself security conscious, and I had already sent my personal email credentials to a man-in-the-middle attacker. I had almost sent over my work credentials too.

I started looking at netstat. I saw that when I would open the browser, the status bar showed it connecting to a proxy. I took a look at Resource Monitor and saw a boatload of public internet addresses that this “Client.exe” was connected to. Netstat showed Client.exe had a listener on port 49181. Chrome is supposed to be connecting to the public internet, not Client.exe.

[Screenshot: Resource Monitor showing Client.exe connected to many public IP addresses]

The first thing I did was go into “Manage Computer Certificates” and delete the two Fiddler certificates from the root store. This successfully changed the green Chrome lock to a proper red error.

The next thing I did was remove the proxy from the LAN settings.

[Screenshot: removing the proxy from LAN settings]

After that I removed “RocketTab” from programs via the Control Panel. As soon as this was done, all the “Client.exe” connections went to TIME_WAIT status because they were reset. RocketTab was the culprit.

The last thing I did was change all my passwords.

This man-in-the-middle attack on client machines needs to stop. It is sneaky activity that normal users do not understand. They generally don’t want the junk applications that these types of ad services support anyway. Users have been socially engineered into installing this stuff, and it is not clear how to get rid of it or even that it is running in the background. It is a poor business model that needs to be destroyed.

 

Posted on March 7, 2015 in Security

 

Disaster Recovery

I have recently been sucked into all that is Disaster Recovery, or Business Continuity Planning. Previously I tended to dodge the topic. I haven’t really enjoyed the subject because it always seems to distract from my focus on backups and local recovery. I liked to focus on the more likely failure scenarios and make sure those were covered before we got distracted. I’m not really sure if that was a good plan or not.

We would have to lose almost our entire datacenter to trigger our disaster recovery plan. A fire in the datacenter, a tornado, or maybe losing our key storage array might trigger DR. Dropping a table in a business application isn’t something you want to trigger a DR plan. Developing a highly available, resilient system is a separate task from developing a DR plan for that system. It was very challenging to convince people to complete a discussion of the local recovery problems without falling into the endless pit of DR.

There seem to be two different business reasons for DR: 1. complete a test of the plan so we can pass an audit once a year, and 2. create a plan so we can actually recover if there were a disaster. The first one comes with a few key caveats: the test must be non-disruptive to the business, it cannot change the data we have copied offsite, and it cannot disrupt the replication of the data offsite.

In a cold or warm DR site, the hardware may be powered on and ready, but it is not actively running any applications. If I were to approach this problem from scratch, I would seriously consider a hot active site. I hear metro clusters are becoming more common. Sites that are close enough for synchronous storage replication enable a quick failover with no data loss. A hot site like this would have many benefits, including:
1. Better utilization of hardware
2. Easier Disaster Recovery testing
3. Planned failovers for disaster avoidance or core infrastructure maintenance

However, there are downsides…
1. Increased complexity
2. Increased storage latency and cost
3. Increased risk of disaster affecting both sites because they are closer

Testing is vital. In our current configuration, in order to do a test we have to take snapshots at the cold site and bring those online in an isolated network. This test brings online the systems deemed critical to the business and nothing more. In an active/active datacenter configuration, the test could be much more thorough because you actually run production systems at the second site.

A basic understanding of DR starts with the simple fact that we need hardware in a second location, but there is much more to DR than a second set of servers. I hope to learn more about the process in the future.

 

Posted on February 7, 2015 in Hardware, Storage, Virtual

 

Reasons you can’t connect to SQL Server

“I can’t connect, can you look at the logs?”

Nope, not today, this is not how we do this. How is the server to LOG if it never receives the request? Do you think the server somehow magically anticipated that you wanted to run a query for the first time? What server are you even talking about???

Connection errors are generally logged on the client side. First read the message carefully and thoroughly for best results. The majority of common errors can be broken down into three categories:

Client issue
Network issue
Server issue

The nature of the word “Connection” means there are some fuzzy areas where two of the three CNS creatures rub uglies. There is a network adapter on the client and a network adapter on the server, and well… there is a network.

Let’s look at one of the more popular reasons you can’t connect to SQL Server: Login Failed.

So which is that, C… N… or S? I can pretty much rule out the network since the client received a message from the server. Maybe it is not even an issue at all, it is a feature I implemented to prevent you from logging into a production server. I really want to put it in the server category, but as I look back on actual cases, it is mostly the fact that access was never requested until it didn’t work. So that is a layer 8 issue with the planning protocol.
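When the message is Login Failed, the server does log it, just on its own terms. A quick server-side check is the undocumented-but-everywhere xp_readerrorlog procedure; a minimal sketch (the search string is only an illustration):

-- Search the current SQL Server error log (file 0, log type 1 = SQL error log)
-- for failed logins; the accompanying error 18456 entries carry a state code
-- that distinguishes causes like a bad password from a disabled login
EXEC master.dbo.xp_readerrorlog 0, 1, N'Login failed';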

Long story short, I really wanted to categorize this list and also provide solutions, but it really depends on the situation of the error. Hopefully this list doesn’t grow much more, since I have gotten better at anticipating people who may eventually want to connect to my databases. A few of the server-side reasons can be checked with the script after the list. Without any further complaints, here are the reasons, off the top of my head, that you can’t connect to SQL Server:

1. You don’t have login access
2. Your login doesn’t have connect privileges
3. The Windows firewall is blocking you
4. The network firewall is blocking you
5. The login doesn’t have public access to the database
6. The server is out of memory
7. The server is actually down
8. The database is in single_user mode
9. The service account is locked out
10. SQL Authentication isn’t enabled on the server
11. You are trying SQL Auth when you should be using Windows Integrated
12. You are typing the password wrong
13. The login is locked out
14. The login is disabled
15. Server cannot generate the SSPI context
16. The service was started with option -m (single user)
17. The VMware host doesn’t have the correct VLAN defined
18. The SQL Server’s IP configuration is wrong
19. The network switch doesn’t allow the VLAN on that port
20. The distributed switch doesn’t have LACP enabled on that port group
21. The SQL Service is being updated
22. The Windows server is being updated
23. You are not specifying the non-standard port
24. You have the wrong instance name
25. You have the wrong server name
26. You have the wrong port
27. You communicated the wrong port to the network firewall admin
28. You are using the :port syntax instead of the ,port syntax
29. SQL is not set to listen on TCP/IP
30. You ran the C: drive out of space causing a cascading failure
31. You are not connected to the VPN
32. You are connected to the guest wifi
33. You bumped the wifi switch on your laptop
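
A handful of these can be ruled out from the server side in one pass. Here is a minimal sketch, assuming a hypothetical login name; the catalog views and the listener DMV (SQL 2012+) are standard, but adjust the names for your shop:

-- Hypothetical login used for illustration
DECLARE @login sysname = N'CONTOSO\app_user';

-- Reasons 13 and 14: is the login locked out or disabled?
SELECT name, type_desc, is_disabled
FROM sys.server_principals
WHERE name = @login;

-- Reason 2: does the login hold CONNECT SQL at the server level?
SELECT pr.name, pe.permission_name, pe.state_desc
FROM sys.server_permissions pe
JOIN sys.server_principals pr ON pr.principal_id = pe.grantee_principal_id
WHERE pr.name = @login;

-- Reason 8: is a database stuck in SINGLE_USER or RESTRICTED_USER?
SELECT name, user_access_desc
FROM sys.databases
WHERE user_access_desc <> 'MULTI_USER';

-- Reasons 23, 26 and 29: which addresses and ports is the instance listening on?
SELECT ip_address, port, type_desc, state_desc
FROM sys.dm_tcp_listener_states;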

 

Posted on January 21, 2015 in SQL Admin, SQL Dev

 

5 9s Lead to Nestfrastructure (and fewer 9s)

Off the top of my head,

Microsoft DNS issue a handful of hours before the Xbox One launch (http://redmondmag.com/articles/2013/11/21/windows-azure-outages.aspx)

Widespread Amazon outages (http://www.zdnet.com/amazon-web-services-suffers-outage-takes-down-vine-instagram-flipboard-with-it-7000019842/)

NASDAQ (http://www.bloomberg.com/news/2013-08-26/nasdaq-three-hour-halt-highlights-vulnerability-in-market.html)

The POTUS’s baby (http://www.healthcare.gov)

I learned about 5 9s in a college business class. If a manufacturer wants to be respected as building quality products, they should be able to build 99.999% of them accurately. That concept has translated to IT as some kind of reasonable expectation of uptime. (http://en.wikipedia.org/wiki/High_availability)

I take great pride in my ability to keep servers running: not only avoiding unplanned downtime, but also developing highly available systems that require little to no planned downtime. These HA features add complexity and can sometimes backfire; simplicity and more planned downtime are often the best choice. If 99.999% uptime is the goal, there is no room for flexibility, agility, budgets or sanity. To me, 5 9s is not a reasonable expectation of uptime even if you only count unplanned downtime. I will strive for this perfection; however, I will not stand idly by while this expectation is demanded.

Jaron Lanier, the author and inventor of the concept of virtual reality, warned that digital infrastructure was moving beyond human control. He said: “When you try to achieve great scale with automation and the automation exceeds the boundaries of human oversight, there is going to be failure … It is infuriating because it is driven by unreasonable greed.”
Source: http://www.theguardian.com/technology/2013/aug/23/nasdaq-crash-data

IMHO the problem stems from dishonest salespeople. False hopes are injected into organizations’ leaders, and these salespeople are often internal to the organization. An example is an inexperienced engineer who hasn’t been around long enough to measure his or her own uptime for a year. They haven’t realized the benefit of tracking outages objectively and buy into new technologies that don’t always pan out. That hope bubbles up to upper management and then propagates down to the real engineers in the form of an SLA that no real engineer could actually achieve.

About two weeks later, the priority shifts to the new code release and not uptime. Even though releasing untested code puts availability at risk, the code changes must be released. These ever-changing goals are prone to failure.

So where is 5 9s appropriate? With the influx of cloud services, the term infrastructure is being too broadly used. IIS is not infrastructure, it is part of your platform. Power and cooling are infrastructure and those should live by the 5 9s rule. A local network would be a stretch to apply 5 9s to. Storage arrays and storage networks are less of a stretch because the amount of change is limited.

Even when redundancies exist, platform failures are disruptive. A database mirroring failover (connections closed), a webserver failure (sessions lost), a compute node failure (OS reboots), and even live migrations of VMs require a “stun” that stops the CPU for a period of time (a second?). The details I listed in parentheses are often omitted from the sales pitch. The reaction varies with each application, and as the load increases on a system, these adverse reactions can increase as well.

If you want to achieve 5 9s for your platform, you have to move the redundancy logic up the stack. Catch errors, wait and retry.
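
A minimal sketch of what “catch, wait and retry” looks like at the T-SQL layer, using a deadlock (error 1205) as the retryable failure; the table and update are hypothetical, and the same pattern belongs in the application tier for dropped connections during a failover:

-- Retry a unit of work up to 3 times when the failure is a deadlock (1205)
DECLARE @retries INT = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- Hypothetical unit of work
        UPDATE dbo.Orders SET status = 'Shipped' WHERE order_id = 42;
        COMMIT TRANSACTION;
        BREAK;  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        SET @retries -= 1;
        IF ERROR_NUMBER() = 1205 AND @retries > 0
            WAITFOR DELAY '00:00:02';  -- back off, then go around again
        ELSE
            THROW;  -- not retryable, or out of retries (SQL 2012+)
    END CATCH
END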

[Diagram: redundancy logic implemented at each layer of the stack]

Yes, use the tools you are familiar with lower in the stack. But don’t build yourself a nest at every layer of the stack; understand the big picture and apply pressure as needed. Just like you wouldn’t jump on every possible new shiny security feature, don’t jump on every redundancy feature. That is how you avoid nestfrastructure.

 

vMotion, an online operation?

There are two types of vMotion: Storage vMotion and regular vMotion. Storage vMotion moves VM files, or a single .vmdk file, to another datastore. Regular vMotion copies the VM’s memory from one host to another and then stuns the VM to pause processing so the new host can open the file and take ownership of the VM. Today I’ll be referring mostly to regular vMotion.

These are both fantastic technologies that allow for rolling upgrades of all kinds and also the ability to load balance workloads based on usage. The Distributed Resource Scheduler (DRS) runs every 5 minutes by default to do this load balancing. Datastore clusters can be automated to balance VMs across datastores for space and usage reasons. Like I said, these technologies are fantastic but need to be used responsibly.

“VMware vSphere® live migration allows you to move an entire running virtual machine from one physical server to another, without downtime” – http://www.vmware.com/products/vsphere/features/vmotion

That last little bit is up for debate; it depends on your definition of downtime. This interesting historical read shows that vMotion was the next logical step after a pause, move and start operation was worked out. Even though VMware now transfers the state over the network and things are much more live, we still have to pause. The virtual machine memory is copied to the new host, which takes time, then the deltas are copied over repeatedly until a very small amount of changed memory is left and the VM is stunned. This means no CPU cycles are processed while that last tiny bit of memory is copied over, the file is closed by the old host, and the file is opened on the new host, which allows the CPU to come back alive. Depending on what else is going on, this can take seconds. Yes, that is plural: seconds of an unresponsive virtual machine.

What does that mean? Usually in my environment, a dropped ping, or maybe not even a dropped ping but a couple of slow pings in the 300ms range. This is all normally fine because TCP is designed to re-transmit packets that don’t make it through, and connections generally stay connected in my environment. However, I have had a couple of strange occurrences in certain applications that have led to problems and downtime. Downtime during vMotion is rare and inconsistent. Some applications don’t appreciate delays during certain operations and throw a temper tantrum when they don’t get their CPU cycles. I am on the side of vMotion and strongly believe these applications need to increase their tolerance levels, but I am in a position where I can’t always do that.

The other cause of vMotion problems is usually related to overcommitted or poorly configured resources. vMotion is a stellar example of super-efficient network usage. I’m not sure what magic sauce they have poured into it, but the process can fully utilize a 10Gb connection to copy that memory. Because of that, vMotion should definitely be on its own VLAN and physical set of NICs. If it is not, the network bandwidth could be too narrow for the vMotion process to complete smoothly, and that last little bit of memory could take longer than normal to copy over, causing the stun to take longer. Very active memory can also cause the last delta to take longer.

Hardware vendors advertise their “east-west” traffic to promote efficiencies they have discovered inside blade chassis. There isn’t much reason for a vMotion from one blade to another blade in the same chassis to leave the chassis switch. This can help reduce problems with vMotions and reduce the traffic on core switches.

In the vSphere client, vMotions are recorded under tasks and events. When troubleshooting a network “blip,” the completed time of this task is the important part; never have I seen an issue during the first 99% of a vMotion. If I want to troubleshoot broader issues, I use some T-SQL and touch the database inappropriately. PowerShell and PowerCLI should be used in lieu of database calls for several reasons, but a query is definitely the most responsive of the bunch. This query lists VMs by their vMotion count since August:


-- Count DRS-initiated vMotions per VM since August 14, 2014,
-- busiest VMs first (run against the vCenter database)
SELECT
    [VM_NAME] AS 'VM',
    COUNT(*) AS 'Number of vMotions'
FROM [dbo].[VPXV_EVENTS]
WHERE
    [EVENT_TYPE] = 'vm.event.DrsVmMigratedEvent'
    AND [CREATE_TIME] > '2014-08-14'
GROUP BY [VM_NAME]
ORDER BY 2 DESC

This query can reveal some interesting problems. DRS kicks in every 5 minutes and decides whether VMs need to be relocated. I have clusters that have DRS on but never need to vMotion any VMs because of load, and I have clusters that are incredibly tight on resources and vMotion VMs all the time. One thing I have noticed is that VMs that end up at the top of this query can sometimes be in a state of disarray. A hung thread or process that is using CPU can cause DRS to search every 5 minutes for a new host for the VM. Given the stun, this isn’t usually a good thing.
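
To see whether a VM’s vMotions are clustered in time, which is the pattern a hung, CPU-burning process tends to produce, a hypothetical per-day variant of the same query works:

-- vMotions per VM per day; more than a dozen in one day deserves a phone call
SELECT
    [VM_NAME] AS 'VM',
    CONVERT(date, [CREATE_TIME]) AS 'Day',
    COUNT(*) AS 'vMotions'
FROM [dbo].[VPXV_EVENTS]
WHERE [EVENT_TYPE] = 'vm.event.DrsVmMigratedEvent'
GROUP BY [VM_NAME], CONVERT(date, [CREATE_TIME])
HAVING COUNT(*) > 12
ORDER BY 3 DESC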

IMHO, a responsible VM admin is willing to contact VM owners when their VMs are hitting the top of the vMotion list. “Don’t be a silent DBA” is some advice I received early in my career: maintenance and other DBA-type actions can be “online” but in actuality cause slowdowns that other support teams may never find the cause for. The same advice applies to VMware admins as well.

 

Posted on September 16, 2014 in Virtual

 

SQL Saturday Columbus Recap #SQLSAT299

I decided to take a brief trip down memory lane for this recap.

http://www.sqlsaturday.com/84/schedule.aspx Attendee, Volunteer
http://www.sqlsaturday.com/160/schedule.aspx Attendee, Volunteer
http://www.sqlsaturday.com/204/schedule.aspx Attendee, Volunteer, Speaker
http://www.sqlsaturday.com/256/schedule.aspx Attendee, Volunteer
http://www.sqlsaturday.com/292/schedule.aspx Attendee, Volunteer Coordinator, Speaker
http://www.sqlsaturday.com/299/schedule.aspx Attendee, Speaker

Some of those session titles are amusing after 3 years, especially anything that has “new” in the title. That first SQL Saturday in 2011 was pretty special. I realized that volunteering helped my more introverted personality get a chance to network with others.

At the Kalamazoo SQL Saturday (#84) I was having a conversation about the pains of double-hop authentication, and another speaker asked me what my session was about, but I was only a volunteer. I didn’t think I was ready to speak (I wasn’t). That person thought, for some strange reason, that I knew my stuff and suggested I whip up a session and try it out. It was advice that I remembered but didn’t act on for quite a while. It was also a total bait question: it is something the speakers are thinking about, and it makes a great icebreaker.

The Detroit SQL Saturday in 2013 was the first time I spoke at a SQL Saturday. I had found a niche that I was passionate enough about to actually enjoy getting up in front of people and presenting. The basic SQL topics are great, but I didn’t feel I had enough groundbreaking experience and depth on any of them to present until I found a way to make security interesting. It was my in, because nobody else seemed to be talking about it. I saw other presenters doing a bit of cross-training into virtualization and storage, so I figured a bit of offensive security and networking concepts would be totally acceptable. A couple of user groups of practice and I was ready for a larger audience. I packed a smaller room full of very interested and thankful people. I’m glad the first time went well because it was very nerve-wracking; I may not have continued to challenge myself in this way had it gone poorly.

Kalamazoo, Detroit and now Columbus. These SQL Saturday conferences have all been very rewarding. I always learn something, meet at least a few new awesome people and give as much back to the community as I can. Getting a reasonably sized, semi-interested audience is priceless to me when I am trying to practice my presentation and public speaking skills. There is only so much I can teach my wife about computers until she murders me in my sleep!

My session in Columbus went well, sans one whoopsie. I have learned I need to get an accurate start and stop time from multiple sources. I started my session at 3:30, thinking the 3:34 in the handout was a typo. It was a typo, but in the other direction: the session was supposed to start at 3:45 according to the website. I started at 3:30 and someone kindly got up and shut the door. A little less than 10 minutes in, I noticed a small crowd peeking through the small glass part of the door, and someone finally opened it. This nearly doubled the people in attendance, so I started over but didn’t show the video ( https://www.youtube.com/watch?v=c36UNSoJenI ) again. Anyway, the slides and demo scripts are posted on the schedule link above.

I decided to attend sessions at this SQL Saturday. Below are the sessions I attended. I particularly liked Kevin Boles’s SQL Injection session because of the hands-on approach. He developed a great demo that showed several different methods of attack and defense. It is also very complementary to my session because I avoid that particular topic for the most part.

[Image: list of sessions attended at SQL Saturday #299]

Also, I would like to thank Mark (https://twitter.com/m60freeman) for organizing a great speaker dinner and event. I’m happy they were able to give me the opportunity to present.

I sometimes imagine where I would be today had I not started attending user groups and events like SQL Saturday. I would most likely be a mess. I have supported an environment that has grown from ~15 SQL servers 5 years ago to almost 200. Without the skills and drive to make SQL Server the best possible platform at my organization, I’m not sure I would have as much responsibility; business users would have run away instead of diving into SQL Server. I imagine myself still being a “DBA” but constantly putting out fires instead of scripting our build and auditing processes. I imagine myself never having the time to research storage and virtualization and becoming confident enough to take on these new administration challenges. I definitely would not have begun the journey of improving my public speaking skills, which has improved my overall quality of life. A place without PASS in my life is a scary place.

 

Posted on June 24, 2014 in PASS

 


I’m Speaking in Columbus June 14th

Free training, free networking and only $10 for lunch. Best you cancel your plans for June 14th and find your way to Columbus, OH.

More details can be found here: http://www.sqlsaturday.com/299/eventhome.aspx

This presentation is similar to the presentation that I delivered at SQL Saturday Detroit.

Hacking SQL Server – A Peek into the Dark Side
The best defense is a good offense. Learn how to practice hacking without going to jail or getting fired. In this presentation we’ll be demonstrating how to exploit weak SQL servers with actual tools of the penetration testing trade. You will learn why the SQL Service is a popular target on your network and how to defend against basic attacks.

Hope to see you there!

 

Posted on May 29, 2014 in PASS

 

SQL Saturday Detroit 292 Recap

And it is all over way too soon.

I normally don’t like to whine and complain to anyone other than my wife and mom when I am sick, but man, was I sick leading up to this SQL Saturday. I picked up some kind of stomach flu, probably from Vegas the week prior at EMCWorld. The thought crossed my mind about warning people that I might be unable to make it if I got any worse. Fortunately, the sickness passed by Friday morning and I was able to muscle through.

Volunteer Coordinator

Volunteer coordinator sounds fancy, but it amounts to getting a list from the event coordinator and lots of communication. I decided to use http://www.volunteerspot.com, which worked well for the BSides Detroit conference I helped at the previous summer. You can sign up for free and set up task lists on different days. Then you simply paste in your list of volunteer emails, and they can choose what items they want to volunteer for. Room proctors, registration desk slots and a few miscellaneous tasks added up to 38 tasks the day of the event, which was a bit of a bear to enter. Friday, I had one 3-hour task to make sure I had a list of people to help set up the rooms and stuff the bags.

Allowing the volunteers to pick their own tasks is something I didn’t think would work out that well, but it actually did. It is much more efficient just to auto-pick all the slots and then do any trades later, but with the help of VolunteerSpot it was easy to let them pick their own so they could attend the sessions they wanted to attend. This is the second year, so we had some experience on the team, which helped this process go smoothly. Two days before the event, while I lay sick in agony, I filled the last 5 or so tasks.

One thing I could improve on is using the report feature they provide. I didn’t think there was one, but there is a giant button on the left side of the UI. Using my giant phablet proved to be more cumbersome than I had anticipated for pulling up a list of tasks to find out who was doing what. Printing off that task list and actually taking attendance first thing the day of the event is something I would recommend.

Presenter

I’m writing to you today nearly a week without coffee or any other substantial form of caffeine. My mental state is surprisingly sound considering I was up to a steady 4 cups a day. I don’t usually start the caffeine intake until around 9 in the morning, which was when my presentation started. I was feeling well, with no headaches, but I did get a couple of comments that the presentation was slow at the start, which may or may not be related.

I chose to try something at the start that I wasn’t sure would work out too well. I showed a 4-minute video from the BBC about the honey badger. Not the crazy and dated “honey badger doesn’t give a crap” video, but one I find hilarious and shocking. It shows how honey badgers escape their confinement no matter how hard the zookeeper tries to keep them caged. I watch this and can’t help comparing hackers to honey badgers. Also, getting that camera in the pen to show how they escape is what I am trying to achieve by showing people how SQL Server is hacked. I intended to use this metaphor throughout my presentation, but I completely forgot all about it. Oh well, better luck in Columbus :]

This was the largest room I have spoken to yet, with roughly 60 people. The chalkboard was a nice addition, which allowed me to illustrate the network, something I am still working out. I was happy to find out I got the larger room, because the previous year the 40-person room was completely packed. I am satisfied with how I did and am really happy to get a large majority of positive feedback and some really good advice from the attendees. My complex demos that require typing all worked, and the projector didn’t have any issues, so I would say I lucked out.

Attendee

Even though the event was in the same place as last year, we got an upgrade in the classrooms that were available to us. They were now furnished with chalkboards, and I think we had more seating than the previous year. My session was the first of the day, and then Grant Fritchey’s followed in one of the larger rooms. I was in a zombie state, so I settled into the nearest seat and vegetated for a bit. The session was titled Building a Database Deployment Pipeline and covered reasons to improve and team up database deployments with code deployments. It didn’t really get into the how, other than mentioning a few tools that I have heard of but am unfamiliar with.

Lunch was in another building, which gave me a chance to walk by the vendor tables. They were a bit out of the way and seemed cramped. I wonder what we may have done better in this area. Had the vendors been set up at the beginning of the day, that would have been the prime time to catch most attendees passing through, but from what I hear that wasn’t the case.

I got to see David Klee’s Hitch impersonation after lunch. Not sure what happened, but he had a terrible-looking allergy attack. With some help from Tim Ford and Grant Fritchey, he continued on with his session, “How to Argue with Your Infrastructure Admins – and Win”. I do like stories of strife, especially when they don’t involve me. I’m not sure I really got what I expected out of the session, but it was enjoyable.

Grant’s session on execution plans is something every SQL Saturday needs. T-SQL and database internals can be explained much more easily with the GUI view of a query plan. He has some really good advice on how to read query plans.

I walked in late to the T-SQL For Beginning Developers session and sat next to my wife, who is an absolute T-SQL beginner. We both felt it was a little too advanced for her. She has a small amount of experience writing code but doesn’t have any database experience. A lot of the nuances that were covered were not that valuable to her or me. Inserts, updates, deletes and selects with some joins should have been covered more. I see so many 3rd-party software products that don’t take advantage of any functions because they want to support all the major database platforms. The session missed my expectations.

Wrap-Up

We were expecting a higher turnout this year because the previous year had a bit of a freak snowstorm, but the initial estimates put us a little under last year in attendance. I feel I could have done a better job promoting the event, especially at my place of employment, but it just wasn’t in the cards. Overall, the event went very well, and I look forward to Columbus and maybe a West Michigan event later this year.

 

Posted on May 23, 2014 in PASS

 

#SQLSatDet has made the front page

The short list of upcoming events now includes SQL Saturday #292 in Detroit http://www.sqlsaturday.com/.

Free training, free networking and only $12 for lunch. Best you cancel your plans for May 17 and find your way to Lawrence Technological University.

The speakers who submitted by the original deadline have been confirmed for at least one session. That means you will have a chance to listen to me talk about SQL Server security in my Hacking SQL Server session. I really enjoyed speaking at this event last year and look forward to this year’s event, including all the pre- and post-activities.

Here is my recap from last year: https://nujakcities.wordpress.com/2013/03/20/sqlpass-sqlsatdetroit-recap/

 

Posted on April 10, 2014 in PASS, Security, SQL Admin

 


Toying with In-Memory OLTP

In six days the bits for SQL 2014 RTM will be available for download. I decided to fling myself into its hot new feature, In-Memory OLTP, with the CTP2 release. I’ve attended one user group that gave an overview of the feature set (thanks @brian78), but other than that I have not read much technical information about In-Memory OLTP.

One selling point that keeps popping up in the literature surrounding the release is ease of implementation. Not all tables in a database have to be In-Memory, and a single query can seamlessly access both classic disk-based tables and In-Memory tables. Since the product isn’t released yet, the information available on the feature set is heavily weighted toward sales. I wanted to see if achieving the 5x-20x performance boost was really as easy as it sounds. Instead of my usual approach of collecting lots of information and reading tutorials, I decided to blaze my own trail.

The first thing to do is create a new database. I noticed a setting that I heard referenced in the overview called delayed durability.

[Screenshot: new database dialog showing the Delayed Durability setting]

Scripting the new database out in T-SQL also shows this new setting. I’m assuming this will make things faster, since changes don’t have to be persisted to disk right away.

[Screenshot: generated CREATE DATABASE script with DELAYED_DURABILITY]

Before I run that script I decide to poke around a bit more. I see some In-Memory settings over on the Filestream page. I’m not sure if that is a necessary requirement or not, but I am going to add a filegroup and file just in case.

[Screenshot: Filestream settings for the new database]

[Screenshot: script adding the filegroup and file]

Now that the database is created I want to create a table. There is a special option in the Script-to menu for In-Memory optimized tables. I’ll create a few dummy columns and try to run it.

[Screenshot: error when creating the memory-optimized table]

There seems to be a problem with my varchar column: “Indexes on character columns that do not use a *_BIN2 collation are not supported with indexes on memory optimized tables.” Well, that is unfortunate. I suppose I will change the collation for this test, but that won’t be easy in real life.

[Screenshot: changing the column collation to a *_BIN2 collation]

After changing the collation I am able to create my memory optimized table.

[Screenshot: the memory-optimized table created successfully]
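
For reference, here is a minimal sketch of what the working script boils down to. The names, paths and bucket counts are hypothetical, and note that the FILENAME for the memory-optimized filegroup points at a folder, not a file:

-- Database with a MEMORY_OPTIMIZED_DATA filegroup; delayed durability allowed
CREATE DATABASE InMemTest
ON PRIMARY (NAME = InMemTest_data, FILENAME = 'C:\Data\InMemTest.mdf'),
FILEGROUP InMemTest_mod CONTAINS MEMORY_OPTIMIZED_DATA
    (NAME = InMemTest_mod, FILENAME = 'C:\Data\InMemTest_mod')
LOG ON (NAME = InMemTest_log, FILENAME = 'C:\Data\InMemTest.ldf');
GO
ALTER DATABASE InMemTest SET DELAYED_DURABILITY = ALLOWED;
GO
USE InMemTest;
GO
-- Memory-optimized table; note the *_BIN2 collation on the indexed varchar
CREATE TABLE dbo.InMemoryTest2
(
    id INT NOT NULL,
    name VARCHAR(50) COLLATE Latin1_General_100_BIN2 NOT NULL,
    CONSTRAINT pk_InMemoryTest2 PRIMARY KEY NONCLUSTERED HASH (id)
        WITH (BUCKET_COUNT = 1024),
    INDEX ix_name NONCLUSTERED HASH (name) WITH (BUCKET_COUNT = 1024)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);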

I wondered if there would be any way to tell from my query plan whether I’m actually hitting the memory-optimized table. It doesn’t appear so…

[Screenshot: query plan showing an index seek]

Was that a 5x performance boost? Am I doing it right? Not sure, but for now I need to take a break.
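
The query plan didn’t give it away, but the catalog views will. sys.tables gained a couple of columns in SQL 2014 for exactly this:

-- Confirm which tables are actually memory-optimized, and their durability
SELECT name, is_memory_optimized, durability_desc
FROM sys.tables
WHERE is_memory_optimized = 1;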

I’m hoping ISVs start supporting this feature, but it might be a lot more work than advertised. After getting that error, I found a list of many things that are not supported on these tables and in the compiled stored procedures: http://msdn.microsoft.com/en-us/library/dn246937(v=sql.120).aspx

This list does not encourage me to blaze new trails and start testing this as soon as it comes out. I prefer to wait a bit and let the other trailblazers blog about the issues they have.

 

Posted on March 26, 2014 in SQL Admin

 
 