
What’s New in vCloud Suite 5.5: VMware Virtual Flash (vFlash)

VMware has announced version 5.5, the latest update to its global virtualisation powerhouse, vCloud Suite.

To read the updates for all the suite components, see my post: What’s New in vCloud Suite 5.5: Introduction

VMware Virtual Flash (vFlash), or to use its official name, “vSphere Flash Read Cache”, is one of the standout new features of vCloud Suite 5.5.

vFlash allows you to take multiple Flash devices across the hosts in a cluster and virtualise them so they can be managed as a single pool. In the same way CPU and memory are seen as a single virtualised resource across a cluster, vFlash does the same by creating a cluster-wide Flash resource.

VMs can be configured to use this vFlash resource to accelerate read performance. vFlash works in write-through cache mode, so in this release it doesn’t cache writes; it simply passes them through to the back-end storage. You don’t need in-guest agents or any changes to the guest OS or application to take advantage of vFlash. You can have up to 2TB of Flash per host, and NFS and VMFS datastores as well as RDMs are all supported. Hosts can also use this resource for the Host Swap Cache, which is used when the host needs to page memory to disk.
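On the host side you can see what Flash capacity is available from the command line. The `esxcli storage vflash` namespace below comes from ESXi 5.5; exact sub-commands may vary by build, so treat this as a sketch:

```shell
# List the vFlash (VFFS) filesystem modules loaded on the host
esxcli storage vflash module list

# List SSD devices eligible for, or already backing, the
# host's virtual flash resource
esxcli storage vflash device list

# List the per-VMDK read caches currently active on this host
esxcli storage vflash cache list
```

These are read-only inspection commands; actually carving SSDs into the virtual flash resource, and sizing the Host Swap Cache, is done from the host’s Virtual Flash settings in the vSphere Web Client.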

A VMDK can be configured with a set amount of vFlash cache, giving you control over exactly which VM disks get the performance boost, so you can accelerate your app’s database drive without having to boost the VM’s OS disk as well. You can configure DRS-based vFlash reservations; there aren’t any shares settings yet, but these may come in a future release. vMotion is also supported: you can choose whether to migrate the cache along with the VM or recreate it on the destination host. vSphere HA is supported too, but when the VM restarts the cache will need to be recreated on the recovery host.
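The per-VMDK reservation is normally set from the VM’s disk settings in the Web Client, but for illustration here is a rough pyVmomi sketch of the equivalent reconfigure call. The property names follow the vSphere 5.5 API (`VFlashCacheConfigInfo`); the VM name and connection details are placeholders:

```python
# Sketch: assign a 4 GB write-through vFlash read cache to a VM's
# first virtual disk via the vSphere API (pyVmomi). Assumes a live
# vCenter connection; host, user, password and VM name are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator", pwd="password")
vm = si.content.searchIndex.FindByDnsName(None, "app-db-vm", True)

# Find the first virtual disk on the VM
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))

# Attach a vFlash read-cache configuration to the disk
disk.vFlashCacheConfigInfo = vim.vm.device.VirtualDisk.VFlashCacheConfigInfo(
    reservationInMB=4096,    # cache reservation for this VMDK
    blockSizeInKB=8,         # cache block size
    cacheMode="write_thru",  # reads are cached, writes pass straight through
)

# Push the edited disk back to the VM
spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(operation="edit", device=disk)
])
vm.ReconfigVM_Task(spec)
Disconnect(si)
```

The reservation is a hard carve-out of the host’s vFlash pool, which is why (as noted below) power-on and vMotion both require a destination host with enough free vFlash capacity to honour it.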

DRS does not automatically vMotion VMs configured with vFlash to load-balance the cluster, except when it has to: to evacuate a host entering maintenance mode, to fix a DRS rule violation, when a host’s resource usage is critically “red”, or when one or more hosts are over-utilised and VM demand is not being met. This DRS behaviour can be turned off, but the idea is not to move your vFlash-configured VMs around unless you really have to; otherwise the performance benefit takes a hit while the cache is migrated to another host or recreated.

vFlash is also supported with OVFs, vApps and templates, so you don’t have to configure vFlash resources separately and it can be part of your deployment process.

If a VM has a vFlash reservation, you won’t be able to power it on or vMotion it to any host without vFlash support.

vFlash and VSAN are completely separate features, even though VSAN can also accelerate performance with Flash. Configuring a VM with vFlash while it is running on a VSAN datastore is not supported; however, you can still run a vFlash-configured VM on a host that is part of a VSAN cluster, as long as the VM lives on a separate NFS/VMFS datastore in the cluster.

You also can’t use the same SSDs for both vFlash and VSAN, but VMware are apparently working on making this possible; perhaps in the future you will be able to logically split or share an SSD, letting VSAN and vFlash use different parts of it.

vFlash is an interesting new addition to the vSphere family, especially as more high-performance applications are being virtualised. Being able to offer better performance as a cluster resource is always a good thing. It will be interesting to see how vFlash compares to PernixData’s recently released FVP, which does write-back caching and therefore accelerates writes as well.

I would also expect traditional storage vendors to build their own flash acceleration technology, which may create some interesting possibilities for host-based flash cache kept in sync with the back-end storage. The world of flash is hotting up!
