Monthly Archives: November 2016

PASS Summit 2016 Recap

Introduction

I have a fairly massive bundle of lanyards in my drawer at work: VMworld, Cisco Live, EMC World, SQL Connections, SQL Rally, GrrCON, BSides Detroit, a bunch of SQL Saturdays, and more. All were great learning experiences, and if I didn’t have to do actual work, I’d love to attend more conferences. I definitely plan to attend the PASS Summit next year and for many years to come.

All employers should commit to a couple of weeks of training for all associates. Most new people can learn from the experience of more senior members of their team, but both the trainer and the trainee need to be available for that to work. Classroom training from companies like New Horizons can come in handy for associates who don’t like to travel, or for companies that don’t have a good mentor in a given field. Conferences that require travel can cost $3-4k but, with the right person, are totally worth it. In fact, sending multiple associates to the same conference gives them a chance for team building somewhat outside the stuffy bounds of the office.

Being a PASS chapter leader grants me free admission to the three main conference days. That got my reported expenses down to about $2k, the majority of that being flight, hotel, and cabs. Chapter leader is a time-consuming job, but it is very rewarding in many ways, including this free ticket.

The Summit stands out from all the other conferences I’ve been to for a couple of reasons. First, the content is mostly not vendor content and is more relevant to the “how” part of my job. Many sessions explain tips and tricks instead of products. We as attendees are mostly not being sold a particular product, because the majority of the speakers are community speakers. Sure, there are definitely lots of fans of SQL Server and not much mention of Oracle/MySQL/PostgreSQL/DB2/etc., but I think you get the point. Second, it has the feeling of a family reunion for a lot of people. We have the crazy uncle, the annoying younger brother, the CEO cousin, and a poor bunch of Cleveland Indians fans. Networking is hard, but it is the most beneficial aspect of the conference. There seem to be more people willing to meet new people at PASS than at any other conference I’ve been to. There are also many other PASS events that give you an opportunity to reconnect with these people.

Without further ado, here is my play-by-play.

Monday

Alaska Airlines (their 23” rule is the most generous free carry-on policy that I am aware of)

I would have liked to attend a pre-con this day, but I would have had to fly out Sunday.

Tuesday

Registration

Chapter Leader Meeting

I didn’t know this last year, but this is a mandatory meeting for chapter leaders who get the free Summit ticket. Lots of good ideas were discussed for my local chapter (http://glass.sqlpass.org) regarding sponsorship, speakers, and locations.

Welcome Reception

A single drink ticket was provided, with a cash bar after that. The food was light but very tasty. Great chance to network.

Wednesday

Keynote

Discussed HTAP, hybrid transactional/analytical processing. Basically, the reason we do ETL is to avoid putting too much stress on the primary system. Some of that need can go away with multiple object types that store the same data in different ways: an In-Memory OLTP table with a clustered columnstore index can serve both reports and transactions at the same time. This is interesting, but the reason columnstore indexes started out read-only (nonclustered columnstore in 2012 and 2014) was that updates to this type of structure are slow. Updates can happen in 2016, but I’m not sure how fast they are.
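
As a rough illustration, this is what that dual-purpose structure looks like in T-SQL. A minimal sketch for SQL Server 2016; the table and column names are made up:

    -- Memory-optimized table (OLTP side) with a clustered columnstore
    -- index layered on the same rows (analytics side).
    CREATE TABLE dbo.Trades
    (
        TradeId   BIGINT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
        Symbol    CHAR(4)      NOT NULL,
        Quantity  INT          NOT NULL,
        TradeTime DATETIME2(7) NOT NULL,
        -- the columnstore index serves the reporting scans
        INDEX ccsi_Trades CLUSTERED COLUMNSTORE
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);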

Microsoft R can run in-database with SQL Server (R Services) or on a standalone Microsoft R Server. They have promised huge performance gains through multithreading and scale-out options. I’m not sure if there is a licensing benefit for us to go standalone, use the version built into SQL Server, or just keep using open source R.
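
For anyone curious what the built-in flavor looks like, here is a minimal sketch of enabling it and sanity-checking it on a SQL Server 2016 instance with R Services installed (the Launchpad service typically needs a restart after the configuration change):

    -- enable in-database R scripts
    EXEC sp_configure 'external scripts enabled', 1;
    RECONFIGURE;

    -- quick check that R actually executes in-process
    EXEC sp_execute_external_script
        @language = N'R',
        @script   = N'OutputDataSet <- data.frame(r_version = R.version.string)';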

Azure IO bandwidth was claimed to be 4x that of the other cloud competitors. SQL Server running on Linux was demoed, and it even works with AlwaysOn availability groups. It isn’t that appealing to me, and it won’t be GA until later in 2017.

Architecting SQL Server for Mission Critical and High Performance (HPE Vendor Session)

Discussed some Superdome hardware architecture. One interesting trend is using NVDIMMs. This non-volatile memory can be used to harden log records. DAX bypasses the storage stack but can still be presented as a drive in Windows. SQL Server can take advantage of this for a small secondary log file with a trace flag and eliminate WRITELOG waits. The size (8GB) and availability of these devices are still prohibitive.
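
If I understood the demo correctly, the pattern is just a tiny second log file placed on the DAX volume. A hedged sketch; the database name and path are hypothetical, and this later shipped as the persisted log buffer feature:

    -- add a small second log file on the DAX-formatted NVDIMM volume
    ALTER DATABASE CurrentDB
    ADD LOG FILE
    (
        NAME     = CurrentDB_PmemLog,
        FILENAME = 'D:\DaxVolume\CurrentDB_PmemLog.pldf',
        SIZE     = 20MB
    );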

Inside SQL Server In-Memory OLTP

Bob Ward has a fast-paced, in-depth speaking style that I like. I attended a couple of his sessions, and he made his slides available at http://aka.ms/bobwardms

The goal of In-Memory OLTP was to be 100x faster than previous versions. It sounds like they got about 30x faster, mostly due to the removal of latching. The important durability differences between an In-Memory table and a regular engine table were discussed. No UNDO process is required for recovery because there is no write-ahead logging: nothing is logged until a transaction commits, so in-flight transactions are simply dropped on failure. Error 701 is what we will get if we don’t size memory properly for the tables we want to enable for In-Memory functionality. There are management objects, but they don’t show up in the database system catalog, since they are themselves in-memory tables and recording that information in regular tables would slow transactions down. There are two main types of indexes, hash and range. Bucket counts need to be determined for hash indexes, but they are much faster if configured properly. XTP_OFFLINE_CKPT is a wait for the thread that reads the transaction log and decides what to flush to disk. It doesn’t slow transactions down because it runs asynchronously.
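
To make the bucket sizing point concrete, here is a hedged sketch of a durable In-Memory table with a hash index; the rule of thumb is a BUCKET_COUNT of roughly one to two times the expected number of unique keys (names and counts here are hypothetical):

    CREATE TABLE dbo.Sessions
    (
        SessionId INT NOT NULL
            -- bucket count sized for ~1 million expected unique keys
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserName  NVARCHAR(128) NOT NULL,
        LoginTime DATETIME2     NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);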

Lightning Talks

Vendor Reception

Nimble Storage and Pure Storage were there; I’m not sure if any other storage vendor was. Talked to AWS about some of the things they do. Providing a database as a service, separate from a database server as a service, might be worth us looking into. VMware was there, and working the booth was one of the speakers from EMC World and Cisco Live. I got some information about some of the largest VM sizes in the wild and found out it might be worth trying to run the Actuarial SQL Servers as a single VM per host.

Thursday

Keynote

It was refreshing to see several options in cloud data warehousing presented. Dr. David DeWitt compared a roll-your-own DW with Amazon’s Redshift, Microsoft’s Azure SQL DW, and Snowflake. He also talked about Oracle RAC and Teradata. I liked the comparison of different vendors with a fairly open-minded approach at a conference that focuses on one vendor. He explained some of the pros and cons of any scale-out data platform.

SQL 2016: It just runs faster

Clock speed hasn’t improved much in 10 years, but the number of cores per processor has gotten really high. Several pieces of the SQL engine needed to be rewritten to process more in parallel. Parallel redo, parallel INSERT...SELECT, automatic scaling of the number of threads, and more partitioning allow for more parallel operations in 2016. We need to work on our ISVs to support later versions of SQL Server if we want to improve the overall user experience.
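
One concrete example of that parallel work: in 2016, an INSERT...SELECT can run in parallel when the target is locked exclusively. A minimal sketch; the table names are made up:

    -- the TABLOCK hint on the target is what enables the parallel insert
    INSERT INTO dbo.SalesArchive WITH (TABLOCK)
    SELECT *
    FROM dbo.Sales
    WHERE SaleDate < '20160101';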

Recovery Models Made Simple

Great session that explained recovery models and a lot about the SQL Server log file. How active transactions affect space used in the log file was explained very well, as were virtual log files and their reuse. fn_dblog(NULL, NULL) can be used to explore the information stored in the log file.
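
A quick taste of that function for anyone who wants to poke around (run it against a test database; the columns below are a small subset of what comes back):

    SELECT TOP (100)
           [Current LSN], Operation, Context, [Transaction ID]
    FROM   fn_dblog(NULL, NULL);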

Monitoring and Alerting of Availability Groups

Picked up a lot of useful tips and tricks for the couple of AGs we have in production. The SSMS dashboard is pretty useful for monitoring performance. Reviewed the basic steps of an AG transaction. Overall it makes me more comfortable with 2016 AGs, even though we try to avoid them, since they mean an extra copy of the data and a Windows cluster on top of a VMware cluster.
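
The same numbers the dashboard shows are also available from the DMVs, which makes alerting scriptable. A hedged sketch of pulling the queue sizes per replica:

    SELECT ag.name                        AS ag_name,
           ar.replica_server_name,
           drs.synchronization_state_desc,
           drs.log_send_queue_size,       -- KB waiting to ship to the secondary
           drs.redo_queue_size            -- KB waiting to be redone on the secondary
    FROM sys.dm_hadr_database_replica_states AS drs
    JOIN sys.availability_replicas AS ar ON ar.replica_id = drs.replica_id
    JOIN sys.availability_groups   AS ag ON ag.group_id   = drs.group_id;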

Vendor Session: Storage Testing

Learned about a fairly new tool, Vdbench, which can simulate more realistic virtual workloads. With dedupe and compression inline, synthetic tools do not provide a whole lot of useful information. Argenis and David are awesome speakers.

Microsoft R Sessions

I attended a SQLCAT session and an introductory session to try to get a handle on how best to manage R for Actuarial. Open source R doesn’t operate in parallel, but Microsoft has created ScaleR functions (rx*) that are comparable to the base ones and do operate in parallel. I saw an interesting demo of using R to predict the date that a database will run out of space. That might be a good way to practice with the R that is built into SQL Server. There wasn’t much around memory management with the built-in R, other than learning it can spawn a lot of extra processes outside of the SQL Server process.
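
As a hypothetical starting point for that practice, something like the sketch below would fit the demo’s idea. The dbo.FileSizeHistory table and its columns are assumptions, and the straight-line model is deliberately naive:

    -- fit a linear growth model over daily file-size samples and
    -- return the growth rate in MB per day
    EXEC sp_execute_external_script
        @language = N'R',
        @script   = N'
            fit <- lm(size_mb ~ day_num, data = InputDataSet)
            OutputDataSet <- data.frame(mb_per_day = coef(fit)[2])',
        @input_data_1 = N'SELECT DATEDIFF(DAY, ''20160101'', sample_date) AS day_num,
                                 size_mb
                          FROM dbo.FileSizeHistory'
    WITH RESULT SETS ((mb_per_day FLOAT));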


Posted on November 21, 2016 in Uncategorized

 

That Time I Made SQL Server Too Fast

Things break. We don’t usually intend them to, they just do. Sometimes, maybe, we do intend them to break. I’m still suspicious that cars break on purpose so we will buy a new one. Other times, they break in ways we don’t expect, like these times I made queries respond too fast.

For example, several years ago, when I was a fairly new DBA, I wasn’t used to the enterprise-class roasting one gets when things are not quite right. We had a multi-tier application: a database, an app server, and a desktop application, which started to respond with errors after a hardware upgrade of the SQL Server.

Darn. We had tested everything we could. We reviewed the code and sent it through the upgrade advisor. We restored production backups and tested those on the new hardware.

But it wasn’t enough load for the system to break in this peculiar way. The app server had a limited number of threads to deal with a response from SQL Server. Seriously: a hard-coded value of 3 threads to go to SQL Server, get a Visual Basic script stored in the database, and execute it on the app server. I kid you not, sometimes the sausage factory isn’t worth the tour.

This dynamic script was now loading up those 3 threads all day, and it was a lengthy script, so sometimes it would hit the other hard-coded timeout values and the users would start to get errors in large numbers instead of just waiting a little longer than usual.

We tried, and tried some more, to work with the vendor. But in the end I had to make SQL Server slower so the app could keep up: we moved this SQL Server back to the old hardware until we could also upgrade the app server.

The most recent time I made SQL Server too fast was a couple of days ago. Another third-party vendor who is XML-happy. Surprisingly, the problem had nothing to do with XML.

Rewind several years: I convinced a co-worker that SQL 2012 would be a great choice for their application even though the vendor didn’t technically support it yet. Who cares! Microsoft supports 2012, not this vendor. We tried and tried to get the thing to work, but the application just wouldn’t cooperate with SQL 2012. Fortunately, we didn’t have any data yet, so we just reinstalled the app and used SQL 2008 R2. I conceded defeat, probably too soon, and rolled it back after delaying their upgrade a couple of weeks.

Back to a few days ago: that same co-worker fell for my tricks again with a different app. This time SQL 2016, because Bob Ward says it just runs faster ( http://www.sqlpass.org/summit/2016/Sessions/Details.aspx?sid=51194 ). Let me tell you how hilarious this error is, as briefly as I can.

This XML-happy application has a boatload of tables. Fairly normalized, but lots of XML chatter back and forth with the application, even with just a single user. We are testing a new version, and a single user was getting a primary key violation. It consistently happened on SQL 2016, so I had him send it back to the vendor, even though he was threatening physical violence. Given our prior experience of jumping on a new version, I don’t blame him. No way could this be SQL 2016; primary keys work the same across all versions, right?

The vendor comes back with a confused email: “We can’t recreate the issue.” Blah blah. The co-worker convinces me to give him access to create the database on a SQL 2012 instance and bam, no error. The beatings start. Even though he knows it’s bogus, I still have bruises, and my fear of the new shiny is reinforced.

Are you ready for the shocking news? SQL 2016 is fast. Really fast. Even with absolutely garbage design and code, it’s fast. That is how we got primary key violations.

The table we are working with has a primary key. The interesting part is that it is a composite key, meaning a few columns together create uniqueness between rows. Two of the columns can be identical across rows as long as the third is different; then we are unique.

Well, in an absolutely vomit-inducing way, the vendor chose a datetime column to create that uniqueness. When several rows come in less than 3 milliseconds apart, even with a boatload of garbage XML to process, we no longer have uniqueness, because datetime isn’t precise enough.
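
Here is a hypothetical repro of the failure mode, boiled down. datetime values round to roughly 3 ms increments (.000/.003/.007), so on fast enough hardware two back-to-back inserts can land on the same tick:

    CREATE TABLE dbo.Repro
    (
        OrderId INT      NOT NULL,
        LineId  INT      NOT NULL,
        StampDt DATETIME NOT NULL,
        CONSTRAINT PK_Repro PRIMARY KEY (OrderId, LineId, StampDt)
    );

    INSERT dbo.Repro VALUES (1, 1, GETDATE());
    INSERT dbo.Repro VALUES (1, 1, GETDATE()); -- fast hardware: same 3 ms tick,
                                               -- primary key violation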

I’m not sure what they might consider a fix, but I explained to them what is happening. Hopefully they don’t make the knee-jerk jump to datetime2 and instead go with an actually unique key.

 

 

 

Posted on November 21, 2016 in Uncategorized