Archive

Archive for the ‘Storage’ Category

Virtualisation Field Day 4 Preview: Scale Computing

January 8th, 2015

Virtualisation Field Day 4 is happening in Austin, Texas from 14th-16th January and I’m very lucky to be invited as a delegate.

I’ve been previewing the companies attending; have a look at my introductory post: Virtualisation Field Day 4 Preview.

 

Scale Computing has presented previously at Storage Field Day 5. This is their first Virtualisation Field Day, but they must be keen to get their message out as they’ve already signed up for Virtualisation Field Day 5.

Scale Computing is another member of the hyper-converged space, along with Nutanix, SimpliVity and VMware’s EVO:RAIL. They have been shipping hyper-converged appliances for as long as SimpliVity but are less well known. Their management team’s provenance includes Avamar (now EMC), Double-Take, Seagate, Veritas and Corvigo.

The writing is on the wall that converged and hyper-converged will be the only way you purchase infrastructure in the future. Why waste time rolling your own? There is therefore plenty of opportunity for a massive market. Scale Computing started life as a scale-out storage platform and then added compute.

Scale Computing has a hyper-converged appliance called HC3 running on KVM, offering an alternative to the behemoths that are VMware and Microsoft. The HC3 name comes from Hyper-Converged 3 (1: servers, 2: storage and 3: virtualisation). Their marketing is all about reducing cost and simplifying virtualisation complexity, making it ideal for those who haven’t adopted virtualisation due to cost and complexity, or who are looking for a new, lower-cost alternative. They generally target SMB-sized workloads, but these can still grow fairly large.

Read more…

VMworld 2014 US: VSAN Architecture Deep Dive #STO1279

August 27th, 2014

Quick notes while attending the VSAN deep dive by Christos Karamanolis, the architect of VSAN, and Christian Dickmann, one of the lead developers of VSAN.

They went into the technical details of some of the functional components and how VSAN decides to distribute data across the cluster to meet availability and performance requirements, and they showed some of the performance numbers.

VSAN key benefits: radically simple, high performance, lower TCO.

VMware increasingly sees vSphere admins also managing storage and VSAN is targeted at them.

VSAN achieves this performance with very low host CPU overhead

2M IOPS possible for 100% reads with 4PB of disks; 640K IOPS with a 70% read / 30% write mix

VSAN clusters are aligned to vSphere clusters for ease of management rather than because of a technical limitation

Policies VSAN can present (see the sketch after this list):

  • object space reservation
  • number of failures to tolerate
  • number of disk stripes per object
  • flash read cache reservation
  • force provisioning
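
These policy settings determine how VSAN lays out an object’s components across the cluster. As a rough illustration only, not VSAN’s actual placement logic (witness counting in particular is simplified here), the sketch below shows how “number of failures to tolerate” and “number of disk stripes per object” multiply out into replicas and components:

```python
# Rough illustration of how VSAN policy settings multiply out into object
# components. Simplified model, not VSAN's placement algorithm; witness
# counting in the real product is more involved than shown here.

def object_layout(failures_to_tolerate: int, stripes_per_object: int) -> dict:
    replicas = failures_to_tolerate + 1               # full copies of the data
    data_components = replicas * stripes_per_object   # each replica is striped
    witnesses = failures_to_tolerate                  # simplified tie-breakers
    return {
        "replicas": replicas,
        "data_components": data_components,
        "witnesses": witnesses,
        "total_components": data_components + witnesses,
    }

# Example: tolerate one host failure, stripe each replica across two disks.
print(object_layout(failures_to_tolerate=1, stripes_per_object=2))
# {'replicas': 2, 'data_components': 4, 'witnesses': 1, 'total_components': 5}
```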

Went through disk layouts and use of flash devices

VSAN asynchronously retires data from flash write buffer to HDD sequentially for performance
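
As a minimal sketch of that destaging behaviour, assuming a generic write-back model rather than VSAN’s actual elevator: writes are absorbed in the flash buffer, overwrites of the same block are coalesced, and blocks are retired to HDD in address order so the spinning disks see a largely sequential stream.

```python
# Generic write-back buffer model: absorb writes in flash, coalesce
# overwrites, then destage to HDD sorted by address for sequential IO.
# Illustrative only; not VSAN's actual destaging implementation.

class WriteBackBuffer:
    def __init__(self):
        self.flash = {}                     # block address -> latest data

    def write(self, block: int, data: bytes):
        # Overwrites of a hot block are absorbed in flash, so only the
        # latest version ever reaches the HDD tier.
        self.flash[block] = data

    def destage(self, hdd: dict):
        # Retire buffered blocks in ascending address order.
        for block in sorted(self.flash):
            hdd[block] = self.flash[block]
        self.flash.clear()

hdd = {}
buf = WriteBackBuffer()
buf.write(42, b"v1")
buf.write(42, b"v2")    # coalesced: only v2 is destaged
buf.write(7, b"x")
buf.destage(hdd)        # block 7 then block 42 written sequentially
```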

With a VSAN license you get the vSphere Distributed Switch even if you don’t have Enterprise Plus licensing

VSAN is an object store, not a file store

The VM home directory object is formatted with VMFS to allow a VM’s configuration files to be stored on it and is mounted under the VSAN root directory; this is similar to VVols

Advantages of objects:

    • storage platform designed for SPBM
      • per VM per VMDK level of service
      • application gets exactly what it needs
    • high availability
      • per object quorum
    • better scalability
      • per VM locking, no issues as number of VMs grows
      • no global namespace translations

A VSAN write stays in the write buffer for as long as possible, as it often changes after the initial write, so it is kept in cache

The host load balances VSAN reads across replicas but always reads a given block from the same replica to keep a single cache copy
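
A conceptual sketch of that read-balancing idea, not VSAN’s actual algorithm: a deterministic block-to-replica mapping spreads reads across replicas for throughput while guaranteeing each block is always served, and therefore cached, by only one replica.

```python
# Conceptual sketch: spread reads across replicas while pinning each
# block to a single replica so it is cached exactly once in the cluster.
# Not VSAN's actual algorithm, just the general technique.

def replica_for_block(block: int, num_replicas: int) -> int:
    # Deterministic: the same block always maps to the same replica,
    # while different blocks spread across all replicas.
    return block % num_replicas

reads = [0, 1, 2, 3, 4, 5, 0, 2, 4]                 # blocks being read
print([replica_for_block(b, 2) for b in reads])     # [0, 1, 0, 1, 0, 1, 0, 0, 0]
```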

VSAN remote cache read latency is negligible, as local SSD latency increases anyway with more data

VSAN supports in-memory local cache for very low latency, used with View Accelerator (CBRC)

VSAN has a scheduler that throttles replication traffic in the cluster but will always leave a little room so replication can at least continue

HA has been heavily modified to work with hyper-converged infrastructure and VSAN

VSAN gives users three options for maintenance mode:

  • ensure accessibility
  • full data migration
  • no data migration

VSAN monitoring and troubleshooting with:

  • vSphere UI
  • command line tools
  • Ruby vSphere Console
  • VSAN Observer.

VMworld US 2014: The Day 2 Buzz

August 27th, 2014


Another Run VMworld with an even bigger group and plenty to talk about.


American style breakfast, hey there was fruit though!


General Session

The second general session, which is usually the more technical show-and-tell of the mass presentations, was led by VMware’s CTO Ben Fathi, making his first VMworld keynote appearance. Wearing jeans and talking to the engineers in the audience, his job was to show off some of the technology announced. He went through the story of businesses stuck in silos battling the change from traditional apps to cloud-native apps. VMware wants to make it much easier to deploy all kinds of workloads, from your private data center using vCloud Suite to the public cloud with vCloud Air, with a common management framework and toolset covering both. Quite a bit of time was spent talking about the power of “and”, saying you can use multiple things (hybrid cloud) rather than having to make a decision and being stuck with “or”.

Read more…


EVO: Rail – Integrated Hardware and Software

August 25th, 2014


VMware has announced it is entering the hyper-converged appliance market in conjunction with hardware partners for them to ship pre-built hardware appliances running VMware software. See my introduction, VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance.

Each of the four compute nodes within the 2U appliance has a very specific minimum set of specifications. Some hardware vendors may go above and beyond this by adding, say, GPU cards for VDI or more RAM per host, but VMware wants a standard approach. These kinds of servers don’t currently exist on the market, other than what other hyper-converged companies whitebox from the likes of SuperMicro, so we’re talking about new hardware from partners.

Each of the four EVO: RAIL nodes within a single appliance will have at a minimum the following:

  • Two Intel E5-2620v2 six-core CPUs
  • 192GB of memory
  • One SLC SATADOM or SAS HDD for the ESXi boot device
  • Three SAS 10K RPM 1.2TB HDD for the VSAN datastore
  • One 400GB MLC enterprise-grade SSD for read/write cache
  • One VSAN certified pass-through disk controller
  • Two 10GbE NIC ports (either 10GBase-T or SFP+ connections)
  • One 1GbE IPMI port for out-of-band management

Each appliance is fully redundant with dual power supplies. As there are four ESXi hosts per appliance, you are covered for hardware failures or maintenance. The ESXi boot device and all HDDs and SSDs are enterprise-grade. VSAN itself is resilient. EVO: RAIL Version 1.0 can scale out to four appliances, giving you a total of 16 ESXi hosts backed by a single vCenter and a single VSAN datastore. There is some new intelligence which automatically scans the local network for new EVO:RAIL appliances when they have been connected and easily adds them to the EVO: RAIL cluster.
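
For a sense of scale, a quick back-of-the-envelope calculation from the node specification above (raw figures only; usable capacity will be lower once VSAN replication and overheads are taken into account):

```python
# Back-of-the-envelope raw totals for a full EVO:RAIL 1.0 deployment,
# based on the per-node specification listed above. Raw figures only:
# VSAN failures-to-tolerate replication and overheads reduce usable space.

NODES_PER_APPLIANCE = 4
MAX_APPLIANCES = 4                  # EVO:RAIL 1.0 scales to 16 hosts
HDD_TB_PER_NODE = 3 * 1.2           # three 1.2TB 10K SAS HDDs
SSD_TB_PER_NODE = 0.4               # one 400GB MLC SSD (cache, not capacity)
RAM_GB_PER_NODE = 192

hosts = NODES_PER_APPLIANCE * MAX_APPLIANCES
print(f"{hosts} ESXi hosts")                               # 16
print(f"{hosts * HDD_TB_PER_NODE:.1f} TB raw HDD")         # 57.6
print(f"{hosts * SSD_TB_PER_NODE:.1f} TB flash cache")     # 6.4
print(f"{hosts * RAM_GB_PER_NODE} GB RAM")                 # 3072
```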

Read more…

EVO: Rail – Management Re-imagined

August 25th, 2014


VMware has announced it is entering the hyper-converged appliance market in conjunction with hardware partners for them to ship pre-built hardware appliances running VMware software. See my introduction, VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance.

The EVO: RAIL management software has been built to dramatically simplify the deployment of the appliances as well as the provisioning of VMs. The user guide is only 29 pages, so you can get an idea of how VMware is driving simplicity. Marvin actually exists as a character icon within the management interface, with an embedded “V” and “M”.

VMware recognises that vCenter has had a rather large feature bloat problem over the years. They have introduced new components like SSO which do provide needed functionality but add to the complexity of deploying vSphere. VMware has also tried to bring all these components together in the vCenter Server Appliance (VCSA).

This is great, but the VCSA has some functionality missing compared to the Windows version, such as Linked Mode, and some customers worry about managing the embedded database for large deployments. As EVO:RAIL is aimed at smaller deployments and isn’t concerned with linking vCenters together, the VCSA is a good option, and the EVO:RAIL software is in fact a package which runs as part of the VCSA. There is no additional database required; it is all built into the appliance and uses the same public APIs to communicate with vCenter, acting as a layer that provides a simpler user experience and hides some of the complexity of vCenter. vCenter is still there, so you can always connect directly with the Web Client and manage VMs as you normally would, and any changes made in either environment are common, so there are no conflicts.

EVO:RAIL is also written purely in HTML5, even for the VM console: no yucky Flash like the vSphere Web Client, and it works on any browser, even an iPad. Interestingly, it has a look which is a little similar to Microsoft Azure Pack. Who would ever have thought VMware would write a VM management interface built for simplicity that is similar to an existing Microsoft one!

Read more…

VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance

August 25th, 2014

VMware will announce shortly today at VMworld US that it is entering the hyper-converged appliance market with a solution called EVO:Rail. This has been rumoured for a while, ever since an eagle-eyed visitor to the VMware campus spotted a sign for Marvin in their briefing center. Marvin was the engineering name and has stuck around in parts of the product, but its grown-up name is EVO:Rail.

EVO(lution) is eventually going to be a suite of products/solutions. Rail, the first announcement, is named for the smallest part of a data center rack, the rail, so you can infer that VMware intends to build this portfolio out to an EVO:RACK and beyond.

EVO:Rail combines compute, storage and networking resources into a hyper-converged infrastructure appliance with the intention of dramatically simplifying infrastructure deployment. Hardware-wise this is pretty much what Nutanix and SimpliVity, as two examples, do today. Spot the acronym, HCIA, to hunt for newly added VMworld sessions.

VMware is not, however, entering the hardware business itself; that would kill the billions of marketing budget spent on the Software Defined Data Center message of software ruling the world. Partner hardware vendors will build the appliance to strict specifications, with VMware’s EVO:RAIL software bundle pre-installed and the appliance delivered as a single SKU. Some may see this as a technicality. VMware has always said that if you need specific hardware you are not software defined. Does EVO:RAIL count as specific hardware?

Support will be with the hardware vendor for both hardware and software, with VMware providing software support to the hardware vendor at the back end.


Read more…

What’s in PernixData FVP’s secret sauce

July 31st, 2014

Anyone who manages or architects a virtualisation environment battles against storage performance at some stage or another. If you run into compute resource constraints, it is very easy and fairly cheap to add more memory or perhaps another host to your cluster.

Being able to add compute incrementally makes it very simple and cost effective to scale. Networking is similar: it is very easy to patch in another 1GbE port, and with 10GbE becoming far more common, network bandwidth constraints seem to be the least of your worries. It’s not the same with storage. This is mainly down to cost and the fact that spinning hard drives haven’t got any faster. You can’t just swap out a slow drive for a faster one in a drive array, and a new array shelf is a large incremental cost.

Sure, flash is revolutionising array storage, but it’s going to take time to replace spinning rust with flash and again it often comes down to cost. Purchasing an all-flash array, or even just a shelf of flash for your existing array, is expensive and a large incremental jump when perhaps you just need some more oomph during your month-end job runs.

VDI environments have often borne the brunt of storage performance issues, simply due to the number of VMs involved, poor client software that was never written to be careful with storage IO and latency, and operational procedures used for mass updates of AV/patching etc. that simply kill any storage. VDI was often incorrectly justified with cost reduction as part of the benefit, which meant you never had any money to spend on storage for what ultimately grew into a massive environment with annoyed users battling poor performance.

Large, performance-critical VMs are also affected by storage. Any IO that has to travel along a remote path to a storage array is going to be that little bit slower. Your big databases would benefit enormously from reducing this round-trip time.

FVP


Along came PernixData at just the right time with what was such a simple solution called FVP. Install some flash, SSD or PCIe, into your ESXi hosts, cluster them as a pooled resource and then use software to offload IO from the storage array to the ESXi host. Even better, it can cache writes as well and also protect them in the flash cluster. The best IO in the world is the IO you don’t have to do, and you can give your storage array a little more breathing room. The benefit is that you can use your existing array with its long update cycles and squeeze a little more life out of it without an expensive upgrade or even moving VM storage. The name FVP doesn’t stand for anything, by the way; it doesn’t stand for Flash Virtualisation Platform if you were wondering, which would be incorrect anyway as FVP accelerates more than flash.
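
The core idea, write-back caching on host-local flash with acknowledged writes protected on a peer host before they reach the array, can be sketched roughly as below. This is a conceptual illustration of the technique only, not PernixData’s implementation; the class and method names are made up for the example.

```python
# Conceptual sketch of host-side write-back acceleration: writes are
# acknowledged once they sit in local flash and a peer host's flash,
# then destaged to the backing array later, off the IO path.
# Illustrative only; this is not PernixData FVP's actual design or API.

class HostFlashCache:
    def __init__(self, name: str):
        self.name = name
        self.flash = {}                      # block -> data held in local flash

    def store(self, block: int, data: bytes):
        self.flash[block] = data

class AcceleratedDatastore:
    def __init__(self, local: HostFlashCache, peer: HostFlashCache, array: dict):
        self.local, self.peer, self.array = local, peer, array

    def write(self, block: int, data: bytes) -> str:
        # Acknowledge as soon as the data is protected in two hosts' flash.
        self.local.store(block, data)
        self.peer.store(block, data)
        return "ack"

    def read(self, block: int) -> bytes:
        # Serve hot blocks from local flash, fall back to the array.
        return self.local.flash.get(block, self.array.get(block))

    def destage(self):
        # Background task: flush buffered writes down to the backing array.
        self.array.update(self.local.flash)

array = {}
ds = AcceleratedDatastore(HostFlashCache("esx01"), HostFlashCache("esx02"), array)
ds.write(100, b"db page")    # acknowledged from flash, not from the array
print(ds.read(100))          # b'db page' served from local flash
ds.destage()                 # the array catches up in the background
```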

Read more…