
VMworld US 2013: The Day 3 Buzz

August 29th, 2013

VMworld continues into day 3 with a noticeable slowing of the average attendee’s walking pace after the festivities of the nights before! There was no keynote today, with sessions starting at 8am.

I attended VMware Horizon Suite: Innovations for Storage Scalability, Performance and Data Protection, presented by Christopher Wells and Chris Gebhardt from NetApp.

Christopher started by saying he doesn’t like load generation tools as they don’t represent reality. Vendors talk about massive, seemingly impressive 1,000,000 IOPS figures, but those don’t represent real-world workloads.

All VDI decisions have implications for storage: automated or manual pools, floating or dedicated user assignments, linked clones, full clones, NetApp VSC clones, along with all the user profile and workload data. All these ways of creating VMs and handling user data have an impact on storage and need to be factored into sizing and performance decisions. Cloning can hurt you if you don’t understand what is happening. Hypervisor clones (snapshots) are the least efficient: every guest read becomes two reads at the back end as data has to be read from two files, and every guest write becomes three writes once metadata is included. This all adds up, with 10 guest IOPS turning into 28 IOPS at the storage, so it must be considered for linked clones as there is not a 1:1 relationship between guest IO and storage IO. It is more efficient not to copy any data at all and instead provision clones with VAAI offload to the array.
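
As a rough illustration of that amplification, here is a minimal back-of-the-envelope sketch in Python. The 2x read and 3x write multipliers are the figures quoted in the session; the 20/80 read/write split is my assumption (a typical steady-state VDI mix) chosen so the 10-to-28 example works out, and your own mix will vary.

```python
# Back-of-the-envelope IO amplification for hypervisor (snapshot-based) clones.
# Assumes every guest read costs 2 backend reads (base disk + delta file) and
# every guest write costs 3 backend writes (including metadata), per the session.
# The 20% read / 80% write split is an assumed steady-state VDI mix.

def backend_iops(guest_iops, read_fraction=0.2,
                 read_multiplier=2, write_multiplier=3):
    reads = guest_iops * read_fraction
    writes = guest_iops * (1 - read_fraction)
    return reads * read_multiplier + writes * write_multiplier

print(backend_iops(10))      # 28.0 -> matches the "10 guest IOPS = 28 IOPS to storage" figure
print(backend_iops(10_000))  # 28000.0 -> e.g. 1,000 desktops at 10 IOPS each
```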

Most of the IOPS generated often come from user workloads and user profiles rather than the VDI image itself.

View Storage Accelerator from VMware is a host-based memory cache for all types of desktops and works transparently to users and applications.

Christopher then went on to talk about the NetApp Virtual Storage Tier, which alleviates boot and login storms. This uses hardware Flash Cache, or Flash Pools on platforms that don’t support Flash Cache.

NetApp suggests using separate volumes or Storage Virtual Machines (SVMs) to separate the storage for VMs, corporate apps and user data. Use different storage capabilities and possibly disk types for each, such as not de-duping temporary data. All these SVMs, each with its own IOPS, capacity and availability characteristics, can be managed under clustered Data ONTAP.

Assessments and sizing are important for Horizon View, as PoCs may not scale linearly. An example is the unexpected “lunch storm”, when users start doing personal things during lunch such as watching YouTube videos, which isn’t likely to be captured during a PoC or with standard load testing tools. NetApp partners with Liquidware Labs for a sizing tool.

Chris Wells then talked about user data in Horizon Workspace. He said NetApp is a good fit for user data as it allows more users than competitors’ storage thanks to de-dupe, non-disruptive operations, and backup and recovery, all of which fit very well with Horizon Data.

NetApp will shortly have a beta coming out of SnapCreator for Horizon Workspace. I was hoping for more information about how Horizon Data integrates with NetApp for backups, recovery and DR, so will need to do some reading to work this out. Horizon Data runs as a virtual appliance which stores its data on local VM disks, so it is going to be interesting to work out how this VM disk file can be managed in a way that still allows file-level recovery.

Here’s a view of the outside chill out area.

[Photos: outside chill-out area]

VMworld TV: Meet the Team Behind the vCloud Hybrid Service

I then attended an EMC session from Chad Sakac: Leading Edge: Evolving to a Software-Defined Data Center. Chad is always an excellent speaker, certainly passionate and engaging with the audience. He talked about the five major disrupters in IT at the moment: low-cost CPU horsepower, virtualisation of all sorts of things, flash, convergence (compute, storage & networking) and cloud.

These disrupters obviously have an impact on storage architectures and EMC is looking at hybrid arrays, all flash arrays, seas of storage, converged models, data protection and automation.

Chad then showed some of the new stuff EMC is working on, like MCx, which utilises the new Intel CPUs for 4x better performance and can scale data processing out across many more cores. There is an EMC announcement on 4th September which should have more information.

Chad also asked the audience how many people are using NFS and then said that many more people should. This is something I’ve been banging on about for years. The inherent scalability and simplicity of NFS is right for most workloads, and once you see how simple it is to run and the huge flexibility you get, you will never want to attach a LUN again!

He then demoed ScaleIO, which EMC acquired, a system that uses local disks to create a SAN at much lower cost. It is similar to VSAN, but I would think it will be further integrated with EMC products so they can have an offering at all price and scale points.

Chad did a small vVolumes demo; vVols were demoed at VMworld 2012 but then pulled from the 5.5 release, likely to make a bigger impact in a possible vSphere 6 release in 2014.

VMworldTV Meets Frank Denneman for an Exclusive Look at His New Book

Next up was Innovations in vMotion: A Technical Preview presented by Jennifer Wu and Gabriel Tarasuk-Levin from VMware.

This session looked at the future of vMotion and started with a recap of the functionality that has been added: multi-NIC vMotion in vSphere 5.0 along with Stun During Page Send (SDPS), and vMotion without shared storage in vSphere 5.1.

VMware is looking to do vMotions across vCenters, across vSwitches and long distance.

Jennifer showed a demo of a long distance vMotion of a VM with a 10GB disk and 1GB of memory from Palo Alto to Bangalore over a 250ms latency link, which completed in three and a half minutes with no pings dropped, very impressive. The VM storage was Storage vMotioned as well during the move.

In order to achieve this, a future version of vCenter will also be required, as the VM UUID needs to be maintained across the two vCenters, along with moving the historical data and events as well as HA properties and DRS affinity rules. You will be able to vMotion across any vCenter in the same SSO domain using the GUI, or use the API across multiple SSO domains. You will need 250Mbps per concurrent vMotion operation on your vMotion network.
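
As a quick sanity check on those figures, here is a rough sketch in Python using the numbers quoted above; the 1Gbps inter-site link is a hypothetical example, and the throughput estimate deliberately ignores compression, protocol overhead and memory page re-transmission.

```python
# Rough long distance vMotion arithmetic, using the figures quoted in the session.

# Demo: ~10GB disk + ~1GB memory moved over a 250ms link in about 3.5 minutes.
data_gb = 10 + 1
seconds = 3.5 * 60
throughput_mbps = data_gb * 8 * 1000 / seconds  # GB -> megabits per second
print(f"Effective demo throughput: ~{throughput_mbps:.0f} Mbps")  # ~419 Mbps

# Sizing: 250Mbps recommended per concurrent vMotion operation.
link_mbps = 1_000        # hypothetical 1Gbps vMotion network between sites
per_vmotion_mbps = 250
print(f"Concurrent vMotions supported: {link_mbps // per_vmotion_mbps}")  # 4
```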

Some use cases were shown: multi-site capacity utilisation, shifting workloads to adjust capacity between data centers with or without shared storage, and disaster avoidance by moving workloads live in advance of a disaster. There will be further integration between long distance vMotion and SRM, as they are different DR mechanisms.

The other interesting use case is moving workloads into vCloud Hybrid Service using long distance vMotion. This will work across different SSO domains and will require multi-tenancy built into the vMotion protocol to separate and secure traffic, as well as per-VM EVC to hide CPU hardware heterogeneity.

I asked about compression and de-dupe, and Gabriel said this may be a possibility from VMware, or third-party WAN optimisers could use their technology to enable faster long distance vMotion with less bandwidth utilisation.

VMworld TV Meets Jim Silvera, an Expert on vCenter Operations

I then attended a Group Discussion on VSAN with Cormac Hogan and R&D engineers Christian Dickmann and Christos Karamanolis.


There was no shortage of questions as VSAN is one of the hot topics of VMworld 2013, and the presenters were extremely knowledgeable and answered all questions thoroughly.

Plenty was discussed; most of the questions were around deployment, availability, disk choice, latency and recovery. I had a chance to look at VSAN before it was announced and wrote a post, What’s New in vCloud Suite 5.5: Virtual SAN (VSAN), so no need to repeat that here. One thing mentioned in the session that I didn’t know was that the upcoming big storage change, VVols, is a way to encourage 3rd party storage companies to implement what VMware has done with VSAN, so if you want to see where VVols is heading, look no further than VSAN. Policies are what make VSAN more than a VSA, so look at my post for details, as this is actually where its power lies, not just in presenting local disk as a VSA.
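
To make the policy point a bit more concrete, here is an illustrative sketch in Python of the kind of per-VM capabilities the VSAN beta exposes through storage policies; the names below are paraphrased from the public beta material rather than exact API identifiers, so treat them as an assumption.

```python
# Illustrative VSAN storage policy (capability names paraphrased from the
# public beta documentation; not exact API identifiers).
example_policy = {
    "failures_to_tolerate": 1,             # host/disk failures the object must survive
    "disk_stripes_per_object": 2,          # stripe width across capacity disks
    "flash_read_cache_reservation_pct": 0, # % of SSD read cache reserved for the object
    "object_space_reservation_pct": 0,     # thick vs thin provisioning behaviour
    "force_provisioning": False,           # place the object even if the policy can't be met
}

# The point made in the discussion: the per-VM policy, not the pooling of local
# disks, is what makes VSAN more than a VSA - each VM gets its own availability
# and performance characteristics from the same shared pool.
for capability, value in example_policy.items():
    print(f"{capability}: {value}")
```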

You can’t do older-style MS clustering on VSAN where a SCSI reservation disk is required, but you can use the newer MS clustering which uses networking to determine what’s available. Metro/stretched clustering is not supported.

VSAN is in public beta and is planned to ship with the first update to vSphere 5.5 in Q1 2014. Interestingly, it was mentioned that Virsto, which VMware acquired, was bought for its great snapshotting capability, which they want to incorporate into VSAN in the future. It was also mentioned that VMDK encryption is being looked at, and that full support for VSAN is planned for the next release of Horizon View.

Martin Casado Speaks to VMworld TV

Next up was Best Practices with Software Defined Storage by the storage frenemies, Vaughn Stewart from NetApp and Chad Sakac from EMC.

They went through some advice for current storage. Always read the documentation from VMware and your storage vendor.

Always keep it simple with the following:

  • Use 16TB NFS volumes or 64TB VMFS datastores.
  • Only use RDMs when required by the application or vendor.
  • Use datastore clusters and SDRS, and match service levels on all datastores in a datastore cluster; disable SDRS IO metrics if the array has storage tiering capability.
  • Use storage efficiencies to reduce costs: thin LUNs & volumes, dedupe, compression, snapshots, clones etc.
  • Use automated storage services: auto tiering, auto grow/extend.
  • Avoid jumbo frames for iSCSI & NFS.
  • Use vCenter plugins to deploy with optimal settings, make things simpler and gain far better visibility into your array, while using RBAC to keep your storage team happy.

They went through the principles of what “software defined” means:

Decoupling and abstracting control and policy (the control plane) from the physical stuff that does the work (the data plane); leveraging the intelligence and integration of the infrastructure, including having some data plane functions done in software on commodity hardware (VSA); and lastly, but most importantly, having programmable infrastructure APIs that automate everything.

Then they went on to why we need this: reducing infrastructure “fragility” by abstracting away and reducing operational complexity, and increasing agility to allow users and app owners to get services faster.

Then it was demo time: OpenStack with vSphere and NetApp Data ONTAP, EMC ViPR with vCenter Orchestrator, NetApp Project SHIFT which uses FlexClone to migrate VM disk files between hypervisors’ proprietary disk formats, and EMC moving workloads into vCHS, along with app demos covering SQL, Oracle and SAP.

Lastly was a peek into the future. The major observation is that data growth is exponential and dominated by object storage. There are emerging storage options for transactional workloads using a mix of clouds, distributed DAS storage, flash and converged infrastructure. Nothing new; I was hoping for some sneak peeks but I suppose they are frustrated waiting for vVolumes to be available.

VMworldTV: Meet Mark Leake, Director of Product Marketing at VMware

During the day I also spent some time in the Solutions Exchange. I wanted to see Mellanox to talk about low-latency networking, where they have a product which can pass virtual networking through to hardware at very low latency. I then spoke to Juniper to see more about NSX; I didn’t get much of a demo but it was good to draw out traffic flows and see how the NSX virtual world will connect to the physical world. I also spoke to Riverbed as I wanted to find out what was new with WAN acceleration.

VMworld then wound down before the official party at AT&T Park, the local baseball stadium. The headline bands were Train and Imagine Dragons and the party was fantastic, with food, drink, acrobats, games and fireworks! Definitely more fun than last year’s party, which was held at the convention center.


