
EMC World 2016 Recap

This was my second time at EMC World, and I enjoyed 2016 just as much as 2014. I ended up signing up for a test and am happy to report that I passed! Most big conferences like this offer a free or reduced-price attempt at one of their exams, and I chose the XtremIO Specialist for Storage Administrators. I prefer taking the exam first thing Monday. Sure, there is a chance I could learn something during the week that might be on the exam, but I think it is more valuable to be well rested and not have my mind cluttered with all the new knowledge. Once that was done, I had time to get into the keynote.

Opening Keynote

Seeing Joe Tucci on stage for possibly the last time was a bit like seeing John Chambers at Cisco Live the previous year. Although the circumstances were different, both crowds seemed to respond the same way to seeing their respective leaders pass the torch. Michael Dell took the stage and had a few interesting things to say:

-Dell Technologies will be the new company name
-Dell EMC will be the enterprise name
-EMC is the best incubator of new technology
-Dell has the best global supply chain
-Both companies combine for 21 products in the Gartner Magic Quadrant

There were also some product announcements at EMC World. Unity is a mid-tier array with an all-flash version for $20K. DSSD D5 had no pricing announced; if you have to ask, it is too expensive. This product addresses some of the I/O stack issues and works with new “block drivers” and “Direct Memory APIs” to reduce latency [1]. If 10 million IOPS isn’t enough, clustering is coming soon. ScaleIO, Virtustream Storage Cloud, enterprise copy data management (eCDM), and the Virtual Edition of Data Domain were also announced.

RecoverPoint

When setting up my schedule, I made sure to book all the interesting-looking RecoverPoint sessions. Gen6 hardware is out, so it is a product that has been around for a while… or has it? EMC didn’t make it easy for us when choosing a product name for RecoverPoint for VMs (RPVM). RPA, or RecoverPoint Appliance, is separate from RPVM. RPVM uses an I/O splitter within ESXi to provide a potential replacement for VMware’s Site Recovery Manager. I took the hands-on lab for RPVM and found it to be pretty complex. It is nice to be able to pick and choose which VMs I protect, but sometimes I want to choose larger groups to reduce the maintenance. Maybe this is possible, but it wasn’t very clear to me. My suspicion is that array-based replication will still be more efficient than host-based replication options such as RPVM or vSphere Replication.

RPA has a very interesting development along the lines of DR. Since first hearing about XtremIO, I have questioned how the writes would be able to replicate fast enough to achieve a decent RPO. RPA can now take XtremIO snapshots in a continuous manner, diff them, and send only the unique blocks over the WAN. That makes things very efficient compared to other methods. Also, the target array will have the volumes that we can make accessible for testing using XtremIO virtual copies (more snapshots).
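
As a rough mental model (mine, not EMC’s documented implementation), the continuous snapshot-shipping loop boils down to diffing block maps between cycles. A minimal sketch in Python:

```python
# Minimal sketch of snapshot-diff replication, not RecoverPoint's actual code.
# A snapshot is modeled as {block_address: block_hash}; only blocks whose
# hashes changed since the last cycle get shipped over the WAN.

def diff_snapshots(prev_snap: dict, curr_snap: dict) -> dict:
    """Return only the blocks that are new or changed since prev_snap."""
    return {addr: h for addr, h in curr_snap.items() if prev_snap.get(addr) != h}

# Example cycle with toy data:
prev = {0: "aa", 1: "bb", 2: "cc"}
curr = {0: "aa", 1: "bd", 2: "cc", 3: "ee"}   # block 1 changed, block 3 is new
print(diff_snapshots(prev, curr))             # {1: 'bd', 3: 'ee'}
```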

DataDomain, DDBoost and ProtectPoint

DataDomain’s virtual appliance announcement was interesting, but I’m not sure I have a specific use case yet. Mainly, the need to back up a branch office might come into play, but I would want a separate array to host that VMDK. ProtectPoint has volume-level recovery features and SQL Server integration now. I can choose to back up a database whose log and data files are on the same volume and then use the SSMS plugin to do a volume-level restore. This grabs the bits from DataDomain and overlays them onto the XtremIO using your storage network. I’m not sure how efficient this restore is since I have only done the hands-on lab, but it is very appealing for our very large databases that tend to choke the IP stack when backing up.

DDBoost v3 is coming out in June. This release includes things like copy-only backups, restore with verify only, AAG support, restore with recovery for log tails, and restore compression. I know many DBAs have had a bad experience with DDBoost so far. I have avoided it, but v3 might be worth a try.

Integrated Copy Data Management and AppSync

If you have two volumes on XtremIO and load them up with identical data one right after another, you will not see a perfect reduction of data. The inline deduplication rate (ballpark 2:1 – 4:1) will kick in and save you some space, but not a lot. If you can implement a solution where you present a volume of pre-loaded data to another host, XVC (writable copies) will save a ton of space. In one session they surveyed several large companies, and they had roughly 8-12 copies of their main production database. Consider that being a 1TB database with 2:1 data reduction: that is 0.5TB of used physical capacity plus the change rate between refreshes. In a traditional array like VMAX (no compression yet), 12 copies plus production is up to 13TB used.
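
Spelling out the back-of-the-envelope math from that session (assuming the survey’s 12 copies and a flat 2:1 reduction; real change rates and dedupe ratios will vary):

```python
# Worked example using the session's ballpark numbers.
db_size_tb = 1.0
copies = 12          # survey said roughly 8-12 copies of production
dedupe_ratio = 2.0   # inline reduction, ballpark 2:1 - 4:1

# XtremIO with XVC: copies share blocks with the source, so physical use is
# roughly the reduced source size plus the change rate between refreshes.
xtremio_tb = db_size_tb / dedupe_ratio
# Traditional array (e.g. VMAX, no compression yet): every copy is full size.
traditional_tb = db_size_tb * (copies + 1)

print(f"XtremIO: ~{xtremio_tb} TB + change rate")   # ~0.5 TB
print(f"Traditional: up to {traditional_tb} TB")    # 13.0 TB
```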

I think one of the goals of the AppSync software is to put the CDM tasks into the hands of the application owner. The storage administrator can set up the runbooks and then create and grant access to a button that does all the necessary steps for a refresh. It sounds like support for Windows clustered drives is in the works, with other features being added soon as well.
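
Conceptually, that refresh button wraps a fixed sequence of steps. The sketch below is purely illustrative; none of these function names come from the AppSync API:

```python
# Hypothetical refresh runbook; the steps mirror what AppSync automates,
# but these are illustrative stubs, not AppSync API calls.

def step(msg: str) -> None:
    print(msg)  # stand-in for a real array/host operation

def refresh_copy(source_lun: str, target_host: str) -> None:
    step(f"quiesce application on {target_host}")
    step(f"unmount the old copy from {target_host}")
    step(f"create writable virtual copy (XVC) of {source_lun}")
    step(f"present and mount the new copy on {target_host}")
    step(f"restart application on {target_host}")

refresh_copy("prod_db_lun", "dev-sql01")
```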

Deep Dive Sessions

I attended a session titled DR with NSX and SRM. The speakers proposed a virtualized network solution that decouples DR from the physical network. No more L2 extension technology required. The cross-vCenter NSX design used Locale ID tags for each site to create local routes. The architecture even had solutions for NATing public websites to the proper location. I hope the slides get posted, because it was pretty deep for me to take in lecture form. The one thing I found fairly comical was the speaker mentioning OTV being expensive as a reason to look at NSX… maybe they have never seen a price for NSX.

The VMware validated reference design was a very good session for me. It validated a lot of our decisions and also got me thinking about a couple of new tweaks. HW v11 can now scale to 128 vCPUs and 4TB of RAM for a single VM. VMs are getting 99.3% efficient versus their hardware counterparts. Some Hadoop architectures even perform better in a virtual environment. My notes look more like a checklist from this session:

-vSphere 6 re-wrote the storage stack (I think for filter integration, not necessarily perf)
-check vCenter server JVM sizing
-rightsize VMs
-size VM into pNUMA if possible (see the sketch after this list)
-don’t use vCPU hot-add (memory hot-add is fine)
-hyperthreading is good
-trust guest & app memory suggestions more than ESX counters
-use multiple SCSI adapters
-PVSCSI is more efficient for sharing
-use Receive Side Scaling in the guest
-use large memory pages
-look at performance KPIs to determine if settings are beneficial (not just CPU% etc.)
-ballooning is an early warning flag for paging
-go for a high number of sockets (wide) when in doubt over vsockets or vcores
-watch out for swapping during monthly Windows patches
-co-stop is a sign the VM is hurting from having too many CPUs
-clock frequency is a latency driver, especially for single-threaded ops
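
A couple of those items are just arithmetic. Here is a small sanity check for the pNUMA sizing note, assuming one NUMA node per physical socket (usually true unless something like cluster-on-die is in play):

```python
# Quick pNUMA fit check; the arithmetic behind "size VM into pNUMA".
# Assumes one NUMA node per physical socket.

def fits_in_pnuma(vm_vcpus: int, vm_mem_gb: float,
                  sockets: int, cores_per_socket: int, host_mem_gb: float) -> bool:
    """True if the VM fits inside a single physical NUMA node."""
    node_mem_gb = host_mem_gb / sockets
    return vm_vcpus <= cores_per_socket and vm_mem_gb <= node_mem_gb

# Example: a 12 vCPU / 128 GB VM on a 2-socket, 10-core, 512 GB host
print(fits_in_pnuma(12, 128, sockets=2, cores_per_socket=10, host_mem_gb=512))
# False - 12 vCPUs spans two 10-core nodes, so expect remote memory access
```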

Another session I attended was Deployment Best Practices for Consolidating SQL Server & iCDM. There is a webinar the last Wednesday in June with scripting examples and demos.

There were some really good storage performance tips in this session:

-vSCSI adapter – disk queue depth and adapter queue depth
-vmkernel admittance (Disk.SchedNumReqOutstanding)
-physical HBA – per-path queue depth
-zoning and multipathing
-shared datastore = shared queue -> Disk.SchedNumReqOutstanding overrides the LUN queue depth (see the sketch after this list)
-separate tempdb for snapshot size purposes; tempdb is a lot of noise and change that isn’t needed
-no need to split data & logs anymore
-still create multiple data files
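
The queueing items all reduce to one idea: the narrowest queue in the path sets each VM’s I/O budget. A rough model with common (not authoritative) default values:

```python
# Rough model of the vSphere storage queueing chain; the defaults below are
# common values, not authoritative, and the narrowest queue in the path wins.

def per_vm_outstanding_io(hba_lun_queue: int = 64,
                          dsnro: int = 32,
                          vms_on_datastore: int = 4) -> float:
    """Approximate outstanding-I/O budget per VM on a shared datastore."""
    # Disk.SchedNumReqOutstanding (DSNRO) caps the LUN queue once more than
    # one VM is issuing I/O to the same datastore.
    lun_depth = min(hba_lun_queue, dsnro) if vms_on_datastore > 1 else hba_lun_queue
    return lun_depth / vms_on_datastore

print(per_vm_outstanding_io())                     # 8.0 with the defaults
print(per_vm_outstanding_io(vms_on_datastore=1))   # 64 - a lone VM owns the queue
```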

Conclusion

The EMC brands are evolving at a pace necessary to keep up with the next wave of enterprise requirements. I was happy to be a part of the conference and hope for a smooth acquisition.
